While trying to render a cube with OpenGL I noticed that the vertices I defined later in my positions array rendered on top of the ones defined earlier:
It is hard to see in the image but if you look closely, you can see the back faces of the cube are rendering on top of the faces closest to the camera. When I move the camera to the other side of the cube it appears normally.
Do I need a check in my fragment shader to see if the face is behind others relative to the camera, and if so how do I implement it?
At first I tried enabling the depth test (I wasn't aware it was off by default), but then nothing rendered. The problem was that I wasn't clearing the depth buffer every frame with glClear(GL_DEPTH_BUFFER_BIT).
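For anyone else hitting this, the fix boils down to roughly the following (a minimal sketch; the context setup and the drawCube() call are placeholders):

// Once, at initialization: depth testing is off by default.
glEnable(GL_DEPTH_TEST);

// Every frame, before drawing: clear the depth buffer along with the color
// buffer, otherwise last frame's depth values reject everything you draw.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawCube(); // placeholder for the cube draw call above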
Problem Solved!
I want to draw a transparent polygon (a pyramid, for example). Some faces appear transparent whereas some faces appear opaque.
I'm drawing using GL_TRIANGLE_STRIP.
I have enabled blend mode, but no luck.
Please see the attached image,
This happens because of the draw order of the triangles. Some triangles get drawn first; these write their depth values to the depth buffer. Then the next triangle comes along and checks whether there's already something in front of it, and if there is, it won't render.
If a triangle at the back renders first, there's no problem: the triangle in front of it checks the depth buffer, sees that the stored value is farther away than its own, and gets rendered correctly. These are the places where the color is less transparent.
The problem arises when the triangle at the front renders first. It writes its depth value to the depth buffer, then the triangle in the back comes along, sees that there's already something in front of it, and doesn't get rendered.
You have multiple ways to solve this: you can disable depth testing, sort the triangles so they are drawn back to front, or use an algorithm like depth peeling. Each of these approaches has side effects or is simply very complex, which is why you don't see much transparency in games.
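If you go the sorting route, the usual frame structure looks roughly like this (just a sketch; the two draw helpers are placeholders):

// 1) Opaque geometry first, with normal depth testing and depth writes.
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueGeometry(); // placeholder

// 2) Transparent geometry sorted back to front, with blending on and depth
//    writes off, so a near transparent triangle can't block a far one.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTransparentSortedBackToFront(); // placeholder
glDepthMask(GL_TRUE);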
I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox, I just don't know how to draw it properly so it will fit around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it's only visible from the outside; once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show when I'm inside the cube, and I could make it larger?
From outside the cube looking at it
From inside looking out
I had a similar problem a few weeks back; if you are looking for some pseudo code, I think I may be able to help. First of all, using a cube isn't the best idea, as your box won't look natural; map the texture to a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves relative to your camera
Apply the texture to that sphere; this will give the impression that the sky is moving relative to you
When you are drawing, disable your z-buffer and frustum culling (assuming you're using a culling algorithm); this allows the skybox to be drawn and ensures the terrain is drawn over the top of the skybox when OpenGL performs its depth tests.
Note: Don't forget to re-enable the z-buffer after the skybox has been drawn, otherwise your terrain elements will appear outside of the sphere, meaning you will only see the skybox.
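In OpenGL terms the steps above come down to something like this (only a sketch; drawSkySphere(), setSkyMatrices() and the GLM usage are assumptions on my part, not code from the repo mentioned below):

// Draw the sky first, centred on the camera, without writing depth.
glDepthMask(GL_FALSE);
glm::mat4 skyView = glm::mat4(glm::mat3(viewMatrix)); // drop the translation so the sphere follows the camera
setSkyMatrices(projectionMatrix, skyView);            // hypothetical shader uniform helper
drawSkySphere();                                      // hypothetical textured sphere mesh
glDepthMask(GL_TRUE);                                 // re-enable depth writes before the terrain

// Everything drawn afterwards will cover the sky as expected.
drawScene(); // placeholder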
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo you can find it here
Check out line 286 in this file to see how the skybox is rendered, then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere. The main method to be concerned with in the shader file is SetShaderParameters().
In terms of moving the skybox relative to your camera, simply set the WVP matrix of your skybox to that of your camera, and then tweak the x, y, z planes of the skybox to your liking.
Extra: If you are going to implement multi-player aspects, just disable back-face rendering for the sphere; then each player can see their own skybox but opponents cannot. Alternatively, you can create one large sphere around the whole world.
Hope that helps - if you need any more help just ask, I know this stuff can be fairly dense at first :)
I'm drawing my scene to a texture using an FBO and reading back a pixel in order to pick my object.
The problem is that the drawing into the texture ignores depth.
On the left is the scene and on the right is the texture (I saved it to a file for debugging).
As you can see, there are two planes, one on top of the other, and the one in front is angled more upward, although in the texture it's the other way around. This makes the user pick the plane in the background when they are actually looking at the other plane.
I've tried enabling everything I could think of, but I guess I'm missing something.
Thanks to ratchet freak,
I've realized that I skipped the depth buffer :)
I just had to create a render buffer for the depth component and attach it to the frame buffer object.
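For anyone else hitting this, the attachment looks roughly like this (a sketch; fboWidth/fboHeight are placeholders and the picking FBO is assumed to be bound):

GLuint depthRbo;
glGenRenderbuffers(1, &depthRbo);
glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, fboWidth, fboHeight);

// Attach the renderbuffer as the depth attachment of the picking FBO.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

// Remember to glEnable(GL_DEPTH_TEST) and clear GL_DEPTH_BUFFER_BIT when
// rendering into the FBO, just like for the default framebuffer.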
So I'm trying to figure out the best way to render a 3D model in OpenGL when some of the textures applied to it have alpha channels.
When I have the depth buffer enabled and start drawing all the triangles in a 3D model, if a triangle that is in front of another triangle in the model gets drawn first, the back triangle simply won't be rendered when it gets to it. The problem is when the front triangle has alpha transparency and should be see-through to the triangle behind it, but the triangle behind is still not rendered.
Disabling the depth buffer eliminates that problem, but creates the obvious issue that if the triangle IS opaque, triangles behind it that are rendered later will still be drawn on top of it.
For example, I am trying to render a pine tree that is basically some cones stacked on top of each other that have a transparent base. The following picture shows the problem that arises when the depth buffer is enabled:
You can see how you can still see the outline of the transparent triangles.
The next picture shows what it looks like when the depth buffer is disabled.
Here you can see how some of the triangles on the back of the tree are being rendered in front of the rest of the tree.
Any ideas how to address this issue, and render the pine tree properly?
P.S. I am using shaders to render everything.
If you're not using any partial transparency (every alpha value is either 0 or 255), you can glEnable(GL_ALPHA_TEST) and that should help you. The problem is that if you render the top cone first, it deposits the whole quad into the z-buffer (even the transparent parts), so the lower branches underneath get z-rejected when it's their turn to be drawn. With alpha testing enabled, pixels that fail the alpha test (set with glAlphaFunc) are not written to the z-buffer.
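In the fixed-function pipeline that would look something like this (sketch):

// Reject fragments with low alpha before they reach the z-buffer.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f); // keep only fragments with alpha > 0.5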
If you want to use partial transparency, you'll need to sort the order of rendering objects from back to front, or bottom to top in your case.
You'll need to leave the z-buffer enabled as well.
[edit] Whoops, I don't believe those functions work when you're using shaders. In the shader case you want to use discard in the fragment shader if the alpha value is close to zero.
// Skip nearly transparent fragments so they never write to the depth buffer.
if (color.a < 0.01) {
    discard;
} else {
    outcolor = color;
}
You need to implement a two-pass algorithm.
The first pass renders only the back faces, while the second pass renders only the front faces.
This way you don't need to sort the triangles, but some artifacts may occur depending on whether your geometry is convex or not.
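Something along these lines (a sketch; drawModel() is a placeholder and blending is assumed to be set up already):

glEnable(GL_CULL_FACE);

// Pass 1: draw only the back faces by culling the front ones.
glCullFace(GL_FRONT);
drawModel(); // placeholder

// Pass 2: draw only the front faces.
glCullFace(GL_BACK);
drawModel();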
I may be wrong, but this is because when you render in 3D, DirectX's default settings don't render the back side of triangles. With the Z buffer disabled it draws the triangles in the order they are submitted; with the Z buffer on it no longer draws their back sides.
It is possible to show both sides of the triangles, even with Z enabled; however, I suspect there's a reason culling is normally enabled, such as speed.
Device->SetRenderState(D3DRS_CULLMODE, Value);
where Value can be one of:
D3DCULL_NONE - shows both sides of the triangle
D3DCULL_CW - culls the front side of the triangle
D3DCULL_CCW - the default state
I ran into this problem when trying to render multiple textures on a 3D model: the textures loaded later are overwriting the ones loaded earlier (say, while doing a perspective projection). Some of the textures are transparent. Is there any way to get around this problem? Thanks.
If I understand your problem, then the only solution is to break the model up into parts, sort the parts by their distance from the camera, and draw them back to front.
Once you have drawn a primitive, it's drawn; the depth buffer can reject pixels that are farther away than a previously drawn primitive at that position, but it doesn't help with transparent primitives.
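A rough sketch of that sort, assuming each part stores a representative centre, a drawPart() helper exists, and GLM is used for the vectors:

#include <algorithm> // std::sort

// Sort the parts far-to-near relative to the camera, then draw in that order.
std::sort(parts.begin(), parts.end(),
          [&](const Part& a, const Part& b) {
              return glm::length(a.center - cameraPos) >
                     glm::length(b.center - cameraPos);
          });
for (const Part& part : parts)
    drawPart(part); // placeholder draw call per part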