OpenGL depth buffer and culling

What's the difference between using back-face culling and using a depth buffer in OpenGL?

Back-face culling is when OpenGL determines which faces are facing away from the viewer and are therefore unseen. Think of a cube. No matter how you rotate the cube, 3 faces will always be invisible. Figure out which faces these are, remove them from the list of polygons to be drawn, and you have just halved your drawing list.
Depth buffering is fairly simple. For every pixel of every polygon drawn, compare its z value to the value already in the z buffer. If it is less than the stored value, draw the pixel and write its z value into the z buffer as the new closest value. If not, discard the pixel. Depth buffering gives very good results but can be fairly slow, as each and every pixel requires a value lookup.
In reality these two methods are not alternatives to each other, and they are often both used. Given a cube, you can first cut out half the polygons using culling, then draw the rest using z-buffering.
Culling can cut down on the polygons rendered, but it doesn't resolve which surface is visible at each pixel. That's what z-buffering does.
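As a minimal sketch (legacy fixed-function OpenGL; the values shown are the library defaults where noted), both features are just render state you enable before drawing:

    /* Back-face culling: discard triangles whose projected winding says
       they face away from the viewer. */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);    /* cull back faces (the default) */
    glFrontFace(GL_CCW);    /* counter-clockwise winding = front face (the default) */

    /* Depth buffering: compare each fragment's z against the depth buffer. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);   /* keep the fragment only if it is closer than what's stored */

    /* Clear both buffers at the start of each frame. */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);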

A given triangle has two sides, the front face and the back face. The side you are looking at is determined by the order the points appear in the vertex list (also called the winding). Typically lists of triangles have alternating winding so that you can reuse the preceding two points, but the facing of a given triangle in the strip doesn't alternate. Back-face culling is the optimization step wherein triangles in the scene which are oriented away from the viewer are removed from the list of triangles to draw.
A depth buffer (z-buffer) is used to keep track of the closest thing (the depth is relative to the view) that has already been rendered at each pixel. If the thing that comes up next in the draw list is behind something I've already drawn (i.e., it has a depth that places it farther away), I can skip drawing it, as it is obstructed. If the new thing to draw is closer, I draw it and update the depth buffer with the new, closer value.
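The test it performs per pixel amounts to something like this (a C sketch of the logic, not an actual OpenGL call; the names are made up for illustration):

    /* depth_buffer is assumed to be cleared to the far value (1.0) each frame. */
    int depth_test_and_write(float *depth_buffer, int pixel, float fragment_depth)
    {
        if (fragment_depth < depth_buffer[pixel]) {
            depth_buffer[pixel] = fragment_depth;  /* remember the new closest value */
            return 1;                              /* fragment passes, its color is written */
        }
        return 0;                                  /* obstructed, fragment is discarded */
    }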

Related

GL_TRIANGLE_STRIP and transparency problems

I want to draw a transparent polygon (a pyramid, for example). Some faces appear transparent, whereas some faces appear opaque.
I'm drawing using GL_TRIANGLE_STRIP.
I have enabled blend mode, but no luck.
Please see the attached image.
This happens because of the draw order of the triangles. Some triangles get drawn first; these write their depth values to the depth buffer, then the next triangle comes along and checks whether there's something in front of it. If there is, it won't render.
If a triangle that is at the back renders first, then there's no problem: the triangle in front of it looks at the depth buffer, sees that the stored value is greater (farther away), and so it gets correctly rendered. These are the places where the color is less transparent.
The problem arises when the triangle at the front renders first. It writes its depth value to the depth buffer, then the triangle in the back comes along, sees that there's already something in front of it, and so it doesn't get rendered.
You have multiple ways to solve this: you can disable depth testing, sort the triangles so they are drawn back to front, or use an algorithm like depth peeling. Each of these has side effects or is simply very complex, which is why you don't see that much transparency in games.
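A common middle ground, sketched below under the assumption that you can sort the transparent triangles yourself (legacy OpenGL; the draw helpers are hypothetical placeholders), is to keep the depth test on but turn depth writes off for the sorted transparent pass:

    /* 1. Opaque geometry first, with normal depth testing and writing. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    drawOpaqueGeometry();                    /* hypothetical helper */

    /* 2. Transparent geometry sorted back to front, with blending on and
          depth writes off, so nearer transparent triangles no longer
          prevent farther ones from rendering. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);
    drawTransparentTrianglesBackToFront();   /* hypothetical helper */
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);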

How to clip texture with arbitrary shape?

I am rendering complex 3d objects. Here is a simple example with a sphere-like object:
Next I am applying a clipping plane to these objects and rendering a texture on this plane, giving the impression you are looking at the inside of the object, as if it was sliced. For example:
The problem is the jagged edge of the texture. It will stick out past the boundary of the surface. Here's another angle where you can see it sticking out. The surface and the texture both derive from the same source data, but the surface is smoothed and has a higher resolution than the texture.
What I want is to be able to somehow clip the texture, so that it never sticks out past the boundary of the surface. Also, I don't want to simply scale down the texture, since although this might prevent it from sticking outside, it would create interior gaps between the texture edge and the surface edge. I would rather the texture be a little too big and have it clipped so that it sits flush against the edge of the surface.
Here's where I am:
I figured the first step would be to define the intersection of the plane and the surface. So now I have that, as an ordered list of line segments. However, I'm not sure how to proceed with this info (or if this is even the best approach).
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape and draw this into a stencil buffer. Then apply this when drawing the texture. (Although I think it's a lot of work since the shapes can be complicated.)
I am wondering if I can somehow use the already drawn surface (in conjunction with a stencil buffer or some other technique) to somehow clip the texture -- without having to go through the extra trouble of deriving the intersection line, etc.
What's the best approach here? (Any online examples you can point me to would also be really helpful.)
If you're clipping convex objects and know the coordinates of the clipped points, you can create the polygonal "cap" yourself: just draw the clipped points in the proper order using GL_TRIANGLE_FAN, and that's it. This won't work with non-convex objects, which would require a triangulation algorithm. You could use the GLU tessellator to triangulate polygons, but that can be tricky.
If the clipped area can be defined by a formula, you can write a shader that precisely discards pixels beyond a certain distance (i.e. if x^2 + y^2 + z^2 > r^2, do not draw the pixel).
You could also draw the back-facing faces with a shader that renders every back-facing pixel as if it were on the clip plane, using simple raytracing. That's complicated, and might be overkill in your case. Dead Rising used a similar technique in its game engine.
Also, you can use the stencil buffer.
Draw back-facing faces first with GL_INCR (glStencilOp(GL_KEEP, GL_INCR, GL_INCR)), then draw front-facing faces with GL_DECR (glStencilOp(GL_KEEP, GL_DECR, GL_DECR)). Then draw the texture only where the stencil is non-zero (glStencilFunc(GL_NOTEQUAL, 0, 0xff); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);). If you have many overlapping shapes, however, you'll need to take special care of them.
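A rough sketch of those stencil passes (legacy OpenGL, assuming a stencil buffer cleared to 0, a single clip plane, and hypothetical drawClippedObject() / drawCapOnClipPlane() helpers):

    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_CULL_FACE);
    glEnable(GL_CLIP_PLANE0);

    /* Pass 1: back faces increment the stencil wherever the clipped interior is exposed. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xff);
    glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
    glCullFace(GL_FRONT);
    drawClippedObject();                 /* hypothetical helper */

    /* Pass 2: front faces decrement, cancelling out everywhere except the cap region. */
    glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
    glCullFace(GL_BACK);
    drawClippedObject();

    /* Pass 3: draw the textured cap only where the stencil is non-zero. */
    glDisable(GL_CLIP_PLANE0);           /* so the cap itself isn't clipped away */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, 0xff);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawCapOnClipPlane();                /* hypothetical helper */

    glDisable(GL_STENCIL_TEST);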
--edit--
However, I'm not sure how to proceed with this info (or if this is even the best approach).
Draw it as a triangle fan. For convex objects, that's all you need. For non-convex objects that won't work.
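For the convex case that could look something like this (immediate-mode sketch; capPoints and numCapPoints stand in for your ordered list of intersection points):

    /* capPoints is assumed to be an array of XYZ triples (e.g. GLfloat capPoints[][3])
       already ordered around the cap. */
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < numCapPoints; ++i)
        glVertex3fv(capPoints[i]);
    glEnd();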
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape
No, it won't work like that. The region you want to fill with the texture should hold a certain stencil value. That's how stencil clipping works.
to somehow clip the texture
In OpenGL you have at least 6 user clip planes (the guaranteed minimum for GL_MAX_CLIP_PLANES). If you need more than that, you'll need advanced techniques: stencil, deriving the intersection line, shaders, or triangulation.
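Setting up one of those user clip planes looks roughly like this (the plane values below are placeholders):

    /* The plane is given as (a, b, c, d) with ax + by + cz + d = 0; geometry on the
       side where ax + by + cz + d >= 0 is kept. The plane is interpreted in the
       coordinate system in effect when glClipPlane is called. */
    GLdouble plane[4] = { 0.0, 1.0, 0.0, 0.0 };  /* e.g. keep y >= 0, clip everything below */
    glClipPlane(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);
    /* ... draw the object ... */
    glDisable(GL_CLIP_PLANE0);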
Any online examples you can point me to would also be really helpful
Drawing Filled, Concave Polygons Using the Stencil Buffer

Rasterizer not picking up GL_LINES as I would want it to

So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However, those black missing columns are not intentional at all; it's just the rasterizer not picking them up.
I'm using integers for positions and bytes for colors, and the projection is an orthographic 1:1 mapping, so translating by 1 means moving 1 pixel.
So my problem is, how to get rid of the black lines? I suppose I could write the data to texture, but that seems expensive. Currently I use a VBO.
Render your columns as quads with a width of 1 pixel instead; the rasterization rules of OpenGL will make sure you have no holes that way.
I realize the question is already closed, but you can also get the effect you want by drawing your lines centered at 0.5. A pixel's center is at 0.5, and drawing a line there will always be picked up by the rasterizer in the right place.
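As a sketch of that, assuming a 1:1 orthographic projection and a column at integer x running from y0 to y1:

    glBegin(GL_LINES);
    glVertex2f(x + 0.5f, y0 + 0.5f);   /* pixel centers sit at half-integer coordinates */
    glVertex2f(x + 0.5f, y1 + 0.5f);
    glEnd();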

Back-facing polygons in OpenGL

In OpenGL you can draw only back-facing polygons, only front-facing polygons, or both. If you render a manifold triangle mesh, then clear the frame buffer but not the depth buffer, and then render only the back-facing polygons again, what do you expect to see?
I think the following answer given to me is wrong:
"You should see the back facing triangles. The first render pass will result in the depth buffer having the depth values of the triangles that are front facing. The second render pass you are rendering the back facing triangles, hence those that have the greatest depth value. Every triangle that is rasterized will have its depth value compared to the current depth value for that pixel. Since the depth buffer is set to all the closest depth values (small values) but is discriminating on the farthest depth values (large values) the back facing triangles will be rendered."
But I think the answer is:
Since the depth buffer is not cleared, and still contains the depth values of the front facing triangles, it would throw out the back facing triangles, and display nothing.
Which answer is correct?
It depends! Assuming the mesh is of an object that is a 2-dimensional manifold (i.e. topologically equivalent to a plane over sufficiently small areas around any point on the surface), the first pass renders front- and back-facing triangles or just front-facing ones, and the depth function is GL_LESS or GL_LEQUAL, then the second paragraph is right, since the front-facing triangles are always in front of the back-facing triangles and hence will always cause the depth test to fail.
Of course, if you use GL_GREATER or GL_GEQUAL as your depth function, the reverse is true, so the first paragraph is correct.
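For reference, the experiment from the question looks roughly like this (legacy OpenGL; drawTheMesh() is a placeholder for submitting the manifold mesh):

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);             /* swap to GL_GREATER and the outcome flips */

    /* Pass 1: render the mesh normally, filling the depth buffer. */
    glDisable(GL_CULL_FACE);
    drawTheMesh();                    /* hypothetical helper */

    /* Clear the color buffer only; the depth values from pass 1 survive. */
    glClear(GL_COLOR_BUFFER_BIT);

    /* Pass 2: render only back-facing polygons by culling the front faces. */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    drawTheMesh();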
I think the second paragraph is false.
Imagine a Möbius band, which is a manifold surface (though not a closed one). You can clearly see some back-facing triangles (in white; front-facing ones are in black) that are closer to the eye. In the second pass they will pass the depth test and be rendered:

What is back face culling?

What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphical pipeline that tests whether the points in the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
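That projected-winding test boils down to the sign of the triangle's signed area in screen space. A small C sketch, assuming the three vertices have already been projected to 2D with y pointing up:

    /* Twice the signed area of the projected triangle (x0,y0)-(x1,y1)-(x2,y2).
       Positive means counter-clockwise; with the usual counter-clockwise-is-front
       convention, a negative result marks a back-facing triangle that can be culled. */
    float signedArea2D(float x0, float y0, float x1, float y1, float x2, float y2)
    {
        return (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0);
    }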
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that are towards you (or, at least, the faces that are not towards you are always occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it will have two consequences:
- the rendering time will be roughly halved (on average)
- the final render won't change (since a front-facing face is drawn on top of each "culled" one anyway)
So you basically get 2x performance for free.
In order to know whether a triangle is front- or back-facing, you take the edge vectors v1-v0 and v2-v0 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw the triangle.
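As a small C sketch of that test (the vector type and helpers here are made up for illustration; viewVector is assumed to point from the camera towards the triangle):

    typedef struct { float x, y, z; } Vec3;

    static Vec3  sub(Vec3 a, Vec3 b)  { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
    static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3  cross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
        return r;
    }

    /* Returns 1 if the triangle (v0, v1, v2) faces the viewer and should be drawn. */
    int isFrontFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 viewVector)
    {
        Vec3 normal = cross(sub(v1, v0), sub(v2, v0));  /* face normal from the two edges */
        return dot(normal, viewVector) < 0.0f;          /* normal points back towards the camera */
    }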
Triangles have their vertices specified in a specific order, called the winding (in OpenGL, counter-clockwise winding marks a front face by default).
When the graphics engine looks at a triangle from a specific direction and the projected vertices appear in the back-facing winding, it knows that it's looking at the back side of the triangle through the object. As the front side of the object is covering that triangle, it doesn't have to be drawn.