Reading depth value of transparent plane with glReadPixels and gluUnProject - opengl

I am trying to create a billiards simulation and have been using glReadPixels along with gluUnProject to unproject my mouse pointer into the scene.
This works fine when the mouse points at an object in the scene (the table, for instance), but when it points at the background it throws off gluUnProject, because the glReadPixels call returns a depth of 1.0.
I'm trying to figure out how to draw a transparent plane at the same level as the table, so that no matter where I point the mouse in my scene, it will get the depth as if it were pointing at the same plane as the table.
If I draw a transparent quad without glAlphaFunc(GL_GREATER, 0.01f);, the quad is drawn as white and the depth testing works as I planned. But when I add the alphaFunc call to make the quad transparent, the depth goes back to what it was before. From what I've seen, glReadPixels reads the pixels from the frame buffer, so this makes sense; I'm just wondering how I can work around it.
I've also tried reversing the winding on the quad so that it wouldn't be visible from above, but this has the same problem of glReadPixels taking its measurements from the framebuffer.
In short, how do I get glReadPixels to get its depth component from an object without drawing that object to the screen?
Here are the calls to glReadPixels and gluUnProject:
winX = (float)x;
winY = (float)viewport[3] - (float)y;
glReadPixels(x, (int) winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(winX, winY, winZ, modelview, projection, viewport, &posX, &posY, &posZ);

That is because alpha testing fully discards fragments that fail the test, so they never reach the depth buffer.
However, gluUnProject should not choke on a depth buffer value of 1.0; unprojecting a depth of 1.0 simply gives you a point on the far clipping plane.

You can completely disable writes to the color buffer using
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
If you draw something after this, it will still get depth tested (if that's turned on) and its depth values will still be written to the depth buffer (if that's turned on) but none of its fragments will be visible in the color buffer.
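Putting it together, a minimal sketch of the workaround (drawScene() and drawTablePlane() are placeholders for your own drawing code):

// 1. Render the visible scene normally.
drawScene();

// 2. Draw an invisible plane at table height: its depth values are
//    written, but nothing shows up in the color buffer.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
drawTablePlane();   // placeholder: a quad at the same height as the table
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// 3. Every pixel now has a meaningful depth for picking.
glReadPixels(x, (int)winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
gluUnProject(winX, winY, winZ, modelview, projection, viewport,
             &posX, &posY, &posZ);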

Related

How to get accurate 3D depth from 2D screen mouse click for large scale object in OpenGL?

I am computing the 3D coordinates from the 2D screen mouse click, then drawing a point at the computed 3D coordinate. Nothing is wrong in the code or the method; everything works fine. But there is one issue related to depth.
If the object size is around (1000, 1000, 1000), I get the full depth, the exact 3D coordinate of the object's surfel. But when I load an object of size (20000, 20000, 20000), I do not get the exact 3D coordinates; the point is drawn with some offset from the surface. So my first question is: why is this happening? And the second: how can I get the full depth and an accurate 3D coordinate for very large objects?
I draw a 3D point with
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 0.999999);
and using
glReadPixels(x, y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &wz);
if (wz > 0.0000001f && wz < 0.999999f)
{
    gluUnProject(...)  // saving the 3D coordinate
}
The reason this happens is the limited precision of the depth buffer. An 8-bit depth buffer, for example, can only store 2^8 = 256 different depth values.
The second parameter set that affects depth precision is the near and far plane settings of the projection, since this is the range that has to be mapped onto the available values in the depth buffer. If one sets this range to [0, 100] with an 8-bit depth buffer, the actual precision is 100/256 ≈ 0.39, which means that approximately 0.39 units in eye space will be assigned the same depth value.
Now to your problem: most probably you have too few bits assigned to the depth buffer. As described above, this introduces an error, since the exact depth value cannot be stored. This is why the points are close to the surface, but not exactly on it.
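As a quick sanity check, you can query how many bits your depth buffer actually got (a minimal sketch using the legacy GL_DEPTH_BITS query):

GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
// with n bits, the near-far range is spread over 2^n distinct values;
// e.g. a [0, 100] range with 8 bits gives steps of 100/256 ≈ 0.39 units
// (ignoring the non-linear distribution of a perspective projection)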
I have solved this issue; it was happening because of the depth range. Since OpenGL is a state machine, I think I must have changed the depth range somewhere, when it should be from 0.0 to 1.0. I think it's always better to set the depth range just after clearing the depth buffer; I now have the following settings just after clearing the depth and color buffers.
Solution:
{
glClearColor(0.0,0.0,0.0,0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glDepthRange(0.0, 1.0);
glDepthMask(GL_TRUE);
}

Fully transparent object in OpenGL

I need to create a completely transparent surface passing through the origin of the axes and always parallel to the screen.
I'm trying to use this code (in C++), but the result is something like a 50% blend (not completely transparent):
glPushMatrix();
glLoadIdentity();
glBlendFunc(1, 1);
glBegin(GL_QUADS);
glVertex3f(-100, -100, -0.003814);
glVertex3f(100, -100, -0.003814);
glVertex3f(100, 100, -0.003814);
glVertex3f(-100, 100, -0.003814);
glEnd();
glPopMatrix();
Additional information: I need this transparent surface in order to get a point on it with the function gluUnProject(winX, winY, winZ, model, proj, view, &ox, &oy, &oz);
If you just want to fill the depth buffer you can disable color writes via glColorMask():
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
drawQuad();
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
doUnproject();
Additional information: I need this transparent surface in order to get a point on it with the function gluUnProject(winX, winY, winZ, model, proj, view, &ox, &oy, &oz);
No, you don't. OpenGL is not a scene graph, and there's no obligation to use values obtained from OpenGL with gluUnProject. You can simply pass in whatever you want for winZ: use 0 if you're interested in the near clipping plane and 1 for the far clipping plane. You can also perfectly well calculate the on-screen position of every point yourself; OpenGL is not magic, the transformations it performs are well documented and easy to do on your own.
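For instance, a minimal sketch (planeY is a hypothetical plane height) that builds a mouse ray by unprojecting at the near and far planes, then intersects it with a horizontal plane, with no depth readback at all:

GLdouble nx, ny, nz, fx, fy, fz;
gluUnProject(winX, winY, 0.0, model, proj, view, &nx, &ny, &nz); // near plane
gluUnProject(winX, winY, 1.0, model, proj, view, &fx, &fy, &fz); // far plane

// intersect the ray with the plane y == planeY (guard against fy == ny)
GLdouble t = (planeY - ny) / (fy - ny);
GLdouble ox = nx + t * (fx - nx);
GLdouble oy = planeY;
GLdouble oz = nz + t * (fz - nz);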
The blend function you are using is known as additive blending:
Final Color = (SourceColor * 1.0) + (DestinationColor * 1.0).
This is anything but fully transparent, unless the framebuffer is already fully white at the location you are blending (DestinationColor == (1.0, 1.0, 1.0)). And even then this behavior only works if you are using a fixed-point render target, because values are clamped to [0.0,1.0] after blending by default.
Instead, consider glBlendFunc(GL_ZERO, GL_ONE):
Final Color = (SourceColor * 0.0) + (DestinationColor * 1.0)
            = Original Color
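Wired up, this approach would look something like the sketch below (drawQuad() is a placeholder for drawing your surface):

glEnable(GL_BLEND);
glBlendFunc(GL_ZERO, GL_ONE);  // source contributes nothing; destination color is kept
drawQuad();                    // depth values are still written as usual
glDisable(GL_BLEND);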
That said, you will probably get better performance if you simply use a color mask to disable color writes, as genpfault suggested. Using a blend function to discard the color of your surface is needlessly complicated.

OpenGL non-square textures

I'm a little new to OpenGL. I am making a 2D application, and I defined a Quad class which defines a square with a texture on it. It loads these textures from a texture atlas, and it does this correctly. Everything works with regular textures, and the textures display correctly, but they don't display correctly when the texture image is not square.
For example, I want a Quad to have a star texture and have the star show up, with the area around the star image that still lies in the Quad being transparent. But what ends up happening is that the star shows up fine, and behind it is another texture from my texture atlas filling the Quad. I assume the texture behind it is just the last texture I loaded into the system? Either way, I don't want that texture to show up.
Here's what I mean. I want the star but not the cloud-ish texture behind it showing up:
The important part of my render function is:
glDisable(GL_CULL_FACE);
glVertexPointer(vertexStride, GL_FLOAT, 0, vertexes);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(colorStride, GL_FLOAT, 0, colors);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, uvCoordinates);
//render
glDrawArrays(renderStyle, 0, vertexCount);
It seems like the obvious choice would be to use an RGBA texture, and make everything but the star transparent by setting the alpha channel to zero for those pixels (and enable alpha blending for the texture unit).
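A minimal sketch of the state setup for that, reusing the names from your render function and assuming the texture was uploaded with an alpha channel:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending

glDrawArrays(renderStyle, 0, vertexCount); // texels with alpha == 0 now show
                                           // whatever was drawn behind the quad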
Use an image manipulation program. Photoshop is a great one; GIMP is a free one. You don't really use OpenGL to crop your textures; rather, your textures need to be prepared beforehand for your program.
There should be some sort of very easy tool to remove everything outside of the star. By remove, I mean make it transparent, which requires an alpha channel. This means you need to make sure that the way you load your textures takes into account 32-bit color (RGBA: red, green, blue, alpha), not just 24-bit color (RGB: red, green, blue).
This will make everything behind your star see-through, or transparent.
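On the loading side, that just means uploading four channels instead of three; a sketch, where width, height, and pixels stand in for your decoded RGBA image data:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,      // internal format with alpha
             width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);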
Also, just an afterthought, it looks like you could be taking a copyrighted image off the internet and using it in your game/program. If you're doing anything commercial, I'd strongly recommend creating your own textures.
You want to make a call to glBindTexture(GL_TEXTURE_2D, 0); after you have mapped your texture.
Here is an example from some code I've written:
// Bind the texture
glBindTexture(GL_TEXTURE_2D, image.getID());
// Draw a QUAD with setting texture coordinates
glBegin(GL_QUADS);
{
// Top left corner of the texture
glTexCoord2f(0, 0);
glVertex2f(x, y);
// Top right corner of the texture
glTexCoord2f(image.getRelativeWidth(), 0);
glVertex2f(x+image.getImageWidth(), y);
// Bottom right corner of the texture
glTexCoord2f(image.getRelativeWidth(), image.getRelativeHeight());
glVertex2f(x+image.getImageWidth()-20, y+image.getImageHeight());
// Bottom left corner of the texture
glTexCoord2f(0, image.getRelativeHeight());
glVertex2f(x+20, y+image.getImageHeight());
}
glEnd();
glBindTexture(GL_TEXTURE_2D, 0);
I am no expert but this certainly solved what you are experiencing for me.

OpenGL: How can I put the skybox at infinity?

I need to know how I can make the skybox appear as if it's at infinity.
I know it's something related to depth, but I don't know exactly what to disable or enable.
First, turn off depth writes/testing (you don't need to bother with turning off depth testing if you draw the skybox first and clear your depth buffer):
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
Then, move the camera to the origin and rotate it by the inverse of the modelview matrix:
// assume we're working with the modelview
glPushMatrix();
// inverseModelView is a 4x4 matrix with no translation and a transposed
// upper 3x3 portion from the regular modelview
glLoadMatrixf(inverseModelView);
Now, draw your sky box and turn depth writes back on:
DrawSkybox();
glPopMatrix();
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
You'll probably want to use glPush/PopAttrib() to ensure your other states get correctly set after you draw the skybox too (make sure to turn off things like lighting or blending if necessary).
You should do this before drawing anything so all color buffer writes happen on top of your sky box.
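For reference, a minimal sketch of building that inverseModelView matrix, assuming the modelview is a pure rotation plus translation (OpenGL stores matrices column-major):

GLfloat m[16], inv[16] = { 0 };
glGetFloatv(GL_MODELVIEW_MATRIX, m);
// transpose the upper 3x3 (the inverse of a pure rotation) and
// drop the translation entirely
for (int c = 0; c < 3; ++c)
    for (int r = 0; r < 3; ++r)
        inv[c * 4 + r] = m[r * 4 + c];
inv[15] = 1.0f;
glLoadMatrixf(inv);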
First, clear the buffers.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Then, save your current modelview matrix and load the identity.
glPushMatrix();
glLoadIdentity();
Then render your skybox.
Skybox.render();
Then, clear the depth buffer and continue normally with rendering
glClear(GL_DEPTH_BUFFER_BIT);
OtherStuff.render();
glutSwapBuffers();
The only problem with drawing the sky box first is that your pixel shader will execute for every pixel of the sky box, just to be overwritten by other objects in your world later on. Your best bet is to render all opaque objects first, then render your sky box. That way the pixel shader for the sky box only executes for the pixels that pass the z-buffer test.
There is no infinity. A skybox is just a textured box, normally with (0, 0, 0) in the middle.
The best approach I can think of is to draw it in a first pass (or layer), then clear only the depth buffer. After that, just draw the rest of the scene in another pass. This way the skybox will always remain "behind" the scene. Just remember to use the same camera for both passes, and somehow snap the skybox to the camera.

OpenGL Frame Buffer Object for rendering to textures, renders weirdly

I'm using Python, but OpenGL is done pretty much exactly the same way as in any other language.
The problem is that when I render a texture, or a line, to a texture by means of a framebuffer object, it comes out upside down and too small, in the bottom-left corner. Very weird. I have these pictures to demonstrate:
This is how it looks,
www.godofgod.co.uk/my_files/Incorrect_operation.png
This is how it did look when I was using pygame instead. Pygame is too slow, I've learnt. My game would be unplayable without OpenGL's speed. Ignore the curved corners. I haven't implemented those in OpenGL yet. I need to solve this issue first.
www.godofgod.co.uk/my_files/Correct_operation.png
I'm not using depth.
What could cause this erratic behaviour? Here's the code you may find useful (the function bodies are indented in the actual code):
def texture_to_texture(target, surface, offset):
    # target holds the texture data for the render target; surface is the
    # texture to draw onto it; offset is where the surface texture will be
    # drawn on the target texture.

    # Create the textures if they don't exist yet (from image data or a block
    # colour). This seems to work fine: direct rendering of these textures to
    # the screen works brilliantly.
    if target.texture is None:
        create_texture(target)
    if surface.texture is None:
        create_texture(surface)

    frame_buffer = glGenFramebuffersEXT(1)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frame_buffer)
    # target.texture is the texture id from the object
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, target.texture, 0)
    glPushAttrib(GL_VIEWPORT_BIT)
    glViewport(0, 0, target.surface_size[0], target.surface_size[1])
    # the list comprehension converts 0-255 colours to 0-1; the drawn
    # textures appear with the correct colour, so don't worry about that
    draw_texture(surface.texture, offset, surface.surface_size,
                 [float(c) / 255.0 for c in surface.colour])
    glPopAttrib()
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0)
    # glDeleteFramebuffersEXT wants a sequence, hence the odd [int(frame_buffer)]
    glDeleteFramebuffersEXT(1, [int(frame_buffer)])
This function may also be useful,
def draw_texture(texture, offset, size, c):
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()  # load identity model matrix
    glColor4fv(c)
    glBegin(GL_QUADS)
    glVertex2i(*offset)                                   # top left
    glVertex2i(offset[0], offset[1] + size[1])            # bottom left
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glVertex2i(offset[0] + size[0], offset[1])            # top right
    glEnd()
    glColor4fv((1, 1, 1, 1))
    glBindTexture(GL_TEXTURE_2D, texture)
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0)
    glVertex2i(*offset)                                   # top left
    glTexCoord2f(0.0, 1.0)
    glVertex2i(offset[0], offset[1] + size[1])            # bottom left
    glTexCoord2f(1.0, 1.0)
    glVertex2i(offset[0] + size[0], offset[1] + size[1])  # bottom right
    glTexCoord2f(1.0, 0.0)
    glVertex2i(offset[0] + size[0], offset[1])            # top right
    glEnd()
You don't show your projection matrix, so I'll assume it's identity too.
OpenGL framebuffer origin is bottom left, not top left.
The size issue is more difficult to explain. What is your projection matrix, after all?
Also, you don't show how you use the texture, and I'm not sure what we're looking at in your "incorrect" image.
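If the flip is the culprit, here is a sketch of a projection that puts (0, 0) at the top left while rendering into the FBO; w and h stand for the target texture's size, and the calls are the same in PyOpenGL:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// top and bottom are swapped relative to the usual glOrtho(0, w, 0, h, ...),
// which flips the image to compensate for OpenGL's bottom-left origin
glOrtho(0.0, w, h, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);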
Some unrelated comments:
Creating a framebuffer each frame is not the right way to go about it.
Come to think of it, why use a framebuffer at all? It seems the only thing you're after is blending into the frame buffer, and glEnable(GL_BLEND) does that just fine.