OpenGL 3.3 z-fighting in ortho 2D view

I'm having some issues with z-fighting while drawing simple 2D textured quads with OpenGL. The symptoms: two objects move at the same speed, one on top of the other, but periodically one shows through the other and vice versa, a sort of flickering. I assume this is indeed z-fighting.
I have turned off depth testing and have the following state set as well:
gl.Disable(gl.DEPTH_TEST)
gl.DepthFunc(gl.LESS)
gl.Enable(gl.BLEND)
gl.BlendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
My view and ortho matrices are as follows (I have also tried much larger near and far distances, e.g. a range of 50000, but that didn't help):
Projection := mathgl.Ortho(0.0, float32(width), float32(height), 0.0, -5.0, 5.0)
View := mathgl.LookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0)
The only difference in my OpenGL setup is that instead of a DrawElements call for each individual object, I package all vertices, UVs (sprite atlas), translation, rotation, etc. into one big batch that is sent to the vertex shader.
Does anyone have remedies for 2D z-fighting?
Edit:
I'm adding some pictures to further describe the scenario:
These images were taken a few seconds apart. They show textures simply moving from left to right; as they move, you can see from the images that one sprite overlaps the other and vice versa, back and forth, very fast.
Also note that my images (sprites) are PNGs with a transparent background.

It definitely isn't depth fighting if you have depth testing disabled as shown in the code snippet.
"I package all vertices, uvs(sprite atlas), translation, rotation, etc in one big package sent to vertex shader." - You need to look into the order that you add your sprites. Perhaps it's inconsistent for some reason.

This could be Z-fighting.
The usual causes are:
fragments are at the same Z coordinate, or closer together than the accuracy of the Z coordinate
fragments are too far from a perspective camera: with a perspective projection, the farther you are from Z-near, the less depth accuracy you get
some ways to fix this:
change the size/position of the overlapped surfaces slightly
use more bits for the Z-buffer (depth)
use a linear or logarithmic Z-buffer
increase Z-near or decrease Z-far, or both; for a perspective projection you can also combine several frustums to get a high-precision Z range
sometimes it helps to use glDepthFunc(GL_LEQUAL) (see the sketch right after this list)
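If you do re-enable the depth test for layered 2D, one way to realize the first and last points above is a small depth bias between co-planar layers. A rough sketch; the offset values are only examples, not tuned:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);              // a surface redrawn at the same depth still passes

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(0.0f, -1.0f);        // pull this layer slightly toward the camera
// ...draw the layer that should appear on top...
glDisable(GL_POLYGON_OFFSET_FILL);
// ...draw the other layer without the offset...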
This could be an issue with blending.
Since you use blending you need to render a bit differently. To render transparency correctly you must Z-sort the scene, otherwise artifacts can occur, especially if you have very dense geometry of transparent objects or objects near them (many polygon edges close together). On top of that, Z-fighting produces artifacts that are an order of magnitude worse when combined with blending.
some ways to fix this:
Z-sorting can be partially done with multi-pass rendering + depth test + switching the front face
so first render all solids, then render the Z-sorted transparent objects with the front face set to the side not facing the camera, and then render the same objects again with the front face set to the side facing the camera. You need the depth test enabled for this! This way you do not need to sort all polygons of the scene, just the transparent objects. The results are not 100% correct for complex transparent geometries, but they are usually good enough (especially for dynamic scenes); a sketch of this pass order appears after this list. This is how the output looks:
it is a glass cup, visually a bit messed up by the blending function chosen for this case: darker pixels mean two layers of glass, which is on purpose and not a bug. That is why the opening looks as if the front/back faces were swapped.
use less dense geometry for transparent objects
get rid of Z-fighting issues
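A rough sketch of the two-pass transparent stage described above (drawTransparentObjects is a hypothetical helper that draws the Z-sorted transparent objects; solids are assumed to have been drawn already, and disabling depth writes is a common extra rather than something the answer requires):
glEnable(GL_DEPTH_TEST);             // the depth test is required for this to work
glDepthMask(GL_FALSE);               // common variant: transparent surfaces test depth but do not write it
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_CULL_FACE);

glCullFace(GL_FRONT);                // pass 1: only the sides not facing the camera
// drawTransparentObjects();

glCullFace(GL_BACK);                 // pass 2: the sides facing the camera
// drawTransparentObjects();

glDepthMask(GL_TRUE);
glDisable(GL_BLEND);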

Related

OpenGL sheared near clipping plane

I have four arbitrary points (lt,rt,rb,lb) in 3d space and I would like these points to define my near clipping plane (lt stands for left-top, rt for right-top and so on).
Unfortunately, these points are not necessarily a rectangle (in screen space). They are however a rectangle in world coordinates.
The context is that I want to have a mirror surface by rendering the mirrored world into a texture. The mirror is an arbitrarily translated and rotated rectangle in 3D space.
I do not want to change the texture coordinates on the vertices, because that would lead to ugly pixelisation when you e.g. look at the mirror from the side. If I did that, culling would also not work correctly, which would lead to a huge performance impact in my case (small mirror, huge world).
I also cannot work with the stencil buffer, because in some scenarios I have mirrors facing each other which would also lead to a huge performance drop. Furthermore, I would like to keep my rendering pipeline simple.
Can anyone tell me how to compute the according projection matrix?
Edit: Of course I have already moved my camera accordingly. That is not the problem here.
Instead of tweaking the projection matrix (which I don't think can be done in the general case), you should define an additional clipping plane. You do that by enabling:
glEnable(GL_CLIP_DISTANCE0);
And then set gl_ClipDistance vertex shader output to be the distance of the vertex from the mirror:
gl_ClipDistance[0] = dot(vec4(vertex_position, 1.0), mirror_plane);
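A small host-side sketch to go with that, assuming GLM and that lt, rt and lb are the mirror's corner points expressed in the same space as vertex_position in the shader (flip the plane's sign if the side you want to keep comes out with negative distances):
#include <glm/glm.hpp>

glm::vec4 makeMirrorPlane(const glm::vec3& lt, const glm::vec3& rt, const glm::vec3& lb)
{
    glm::vec3 n = glm::normalize(glm::cross(rt - lt, lb - lt)); // plane normal from two edges
    float d = -glm::dot(n, lt);                                 // plane offset
    return glm::vec4(n, d);  // dot(plane, vec4(p, 1.0)) is then the signed distance of p
}

// glEnable(GL_CLIP_DISTANCE0);
// glUniform4fv(mirrorPlaneLocation, 1, &plane[0]);  // consumed as mirror_plane in the vertex shader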

OpenGL Fog does not appear

I wanted to create a coordinate system with some lines in it, and wanted to display one window with depth-fog.
My "fog-code" looks like this:
glEnable(GL_FOG);
float fogColor[4] = {0.8, 0.8, 0.8, 1};
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY,0.8);
glHint(GL_FOG_HINT, GL_NICEST);
glFogf(GL_FOG_START,0.1);
glFogf(GL_FOG_END,200);
and it is placed in my main function (I don't know yet whether this could cause any problems, but I mention it just to be sure), right after the init() call and before my display function call.
Update:
The problem was actually really simple: I worked solely on the GL_MODELVIEW matrix, thinking there was no real difference from the GL_PROJECTION matrix. According to this article and the post from Reto Koradi, there is a pretty significant difference. I highly recommend reading the full article to better understand the system behind OpenGL (it definitely helped me a lot).
The corrected code (for my init()-call) would then be:
void init2()
{
    glClearColor(1.0, 1.0, 1.0, 0.0);          // set background color to white
    glMatrixMode(GL_PROJECTION);               // switch to projection mode
    glLoadIdentity();                          // initialize the projection matrix
    glOrtho(-300, 300, -300, 300, -800, 800);  // map coordinates to the viewport
    gluLookAt(2,2,10, 0,0,-0.5, 0,1,0);
    glMatrixMode(GL_MODELVIEW);                // now switch to modelview mode
}
The fog equation is evaluated based on the value of c, which is (quoting the OpenGL 2.1 spec):
Otherwise, if the fog source is FRAGMENT DEPTH, then c is the eye-coordinate distance from the eye, (0,0,0,1) in eye coordinates, to the fragment center.
FRAGMENT_DEPTH is the default, so this applies in your case. Eye coordinate refers to the coordinates after the model-view transformation has been applied. So it's the distance from the origin after applying the model-view transform. The spec also allows implementations to use the absolute value of the z-coordinate instead of the distance from the origin.
One small observation on your code: GL_FOG_DENSITY does not matter if the mode is GL_LINEAR. It is only used for the exponential modes.
For GL_LINEAR mode, the behavior is pretty much as you would expect. The original fragment color is linearly blended with the fog color within the range GL_FOG_START to GL_FOG_END. So everything smaller than GL_FOG_START has the original fragment color, everything after GL_FOG_END has the fog color, and the values in between are linear interpolations between the two, with gradually more fog color and less original fragment color.
To get good results, you'll have to play with the GL_FOG_START and GL_FOG_END values. If you don't get as much fog as desired, you can start by reducing the value of GL_FOG_END.
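For reference, the GL_LINEAR blend described above, written out in plain C++ (dist is the eye-space distance the spec quote refers to):
float linearFogFactor(float dist, float fogStart, float fogEnd)
{
    float f = (fogEnd - dist) / (fogEnd - fogStart);   // 1.0 at fogStart, 0.0 at fogEnd
    return f < 0.0f ? 0.0f : (f > 1.0f ? 1.0f : f);    // clamp to [0, 1]
}
// finalColor = f * fragmentColor + (1.0f - f) * fogColor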
I peeked at the linked code, and noticed one problem: You're specifying the projection matrix while you're in GL_MODELVIEW matrix mode. You need to be careful that you specify the matrices in the correct matrix mode, which is GL_PROJECTION for the projection matrix.
Mixing up the matrix modes does not have an adverse effect on the resulting vertex coordinates, since both the model-view and projection matrices are applied to the vertices. So for very simple use, you can sometimes get away with using the wrong mode. But once lighting comes into play, it is critical to use the correct matrix mode, since lighting calculations are done after the model-view transformation has been applied, but before the projection transformation.
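For reference, a minimal sketch of the conventional split, reusing the values from the question's init code:
glMatrixMode(GL_PROJECTION);                  // projection matrix only
glLoadIdentity();
glOrtho(-300, 300, -300, 300, -800, 800);

glMatrixMode(GL_MODELVIEW);                   // camera + model transforms
glLoadIdentity();
gluLookAt(2, 2, 10,  0, 0, -0.5,  0, 1, 0);   // the view transform belongs here, not in GL_PROJECTION
// ...model transforms and drawing follow...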
And yes, as others already pointed out, a lot of this actually gets simpler if you write your own shaders. The fact that I quoted the OpenGL 2.1 spec is probably a hint that this functionality is old and obsolete.
Like too many things that OpenGL 1.1 did, fog is calculated per vertex. So if you have a long line with only two points, fog is calculated only for the end points and the color is then interpolated linearly in between. Depending on how your line is aligned and which shading mode you use, this may result in no apparent fogging.
Two solutions:
Subdivide the lines into a couple of dozen line segments, so that the fog is sampled at more than two points.
or
Use a fragment shader instead and calculate the fog term therein. This is what I suggest doing.
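A minimal sketch of the fragment-shader route, with the fog term evaluated per fragment; the GLSL source is shown here as a C++ string, and the uniform/varying names are just illustrative:
const char* fogFragmentShader = R"GLSL(
#version 120
uniform vec3  fogColor;
uniform float fogStart;
uniform float fogEnd;
varying vec3  eyePos;     // eye-space position passed from the vertex shader
varying vec4  baseColor;  // unfogged color of the fragment

void main()
{
    float dist = length(eyePos);                                  // per-fragment distance to the eye
    float f    = clamp((fogEnd - dist) / (fogEnd - fogStart), 0.0, 1.0);
    gl_FragColor = vec4(mix(fogColor, baseColor.rgb, f), baseColor.a);
}
)GLSL";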

OpenGL: Drawing very thin triangles with a TriangleList turns them into points

I'm using a TriangleList to output my primitives. Most of the time I need to draw rectangles, triangles and circles. From time to time I need to draw very thin triangles (width = 2 px, for example). I expected them to look (almost) like a line, but they come out as separate points :)
The following picture shows what I'm talking about:
The first picture on the left shows how I draw a rectangle (counter-clockwise, from the top-right corner). Then you can see the "width" of the rectangle, which I call "dx".
How can I avoid this behavior? I would like it to look like a straight (almost straight) line, not like points :)
As @BrettHale mentions, this is an aliasing problem. For example, without super/multisampling a thin triangle may cover only the centre of the bottom-right pixel of a block, so only that pixel receives colour. Real pixels have area and, in a perfect situation, would receive a portion of the colour equal to the area covered. "Antialiasing" techniques reduce the aliasing effects caused by not integrating colour across pixels.
Getting it to look right without being incredibly slow is hard. OpenGL provides GL_POLYGON_SMOOTH, which conservatively rasterizes triangles and draws the correct percentages of colour to each pixel using blending. This works well until you have overlapping triangles and you hit the problem of transparency sorting where order-independent transparency is needed. A simple and more brute force solution is to render to a much bigger texture and then downsample. This is essentially what supersampling does, except the samples can be "anisotropic" (irregular) which gives a nicer result. Multisampling techniques are adaptive and a bit more efficient, e.g. supersample pixels only at triangle edges. It is fairly straightforward to set this up with OpenGL.
However, as the triangle's width approaches zero its area does too, so it will still disappear entirely even with antialiasing (although it will fade out rather than become pixelated). Although not physically correct, you may instead want a minimum one-pixel-wide triangle so that you get the lines you want even when the triangle is really thin. This is where doing your own conservative rasterization may be of interest.
This is the problem of skinny triangles in general. For example, in adaptive subdivision, when you have skinny T-junctions, it happens all the time. One solution is to draw the edges (you can use GL_LINE_STRIP) with antialiasing enabled. You can set:
Gl.glShadeModel(Gl.GL_SMOOTH);
Gl.glEnable(Gl.GL_LINE_SMOOTH);
Gl.glEnable(Gl.GL_BLEND);
Gl.glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA);
Gl.glHint(Gl.GL_LINE_SMOOTH_HINT, Gl.GL_DONT_CARE);
before drawing the lines so you get lines when your triangle is very small...
This is called a subpixel feature, when geometry gets smaller than a single pixel. If you animated the very thin triangle, you would see the pixels pop in and out.
Try turning multisampling on. Most GL windowing libraries support a multisampled back buffer. You can also force it on in your graphics driver settings.
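For example, with GLFW (other toolkits have an equivalent hint) a multisampled back buffer can be requested like this; this is only a sketch of the setup calls:
glfwWindowHint(GLFW_SAMPLES, 4);         // request 4x MSAA for the default framebuffer
GLFWwindow* window = glfwCreateWindow(800, 600, "msaa", nullptr, nullptr);
glfwMakeContextCurrent(window);

glEnable(GL_MULTISAMPLE);                // usually on by default for an MSAA framebuffer, but be explicit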
If the triangle is generated by a geometry shader, you can make the triangle size dynamic.
For example, you can make the triangle width always greater than 1 px:
// NDC coordinates range from -1.0 to 1.0 and the screen width is 1920 pixels,
// so one pixel is 2.0 / 1920.0 NDC units wide.
float pixel_unit = 2.0 / 1920.0;
// Remember to divide by the w component (perspective division) before measuring in NDC.
vec2 p0 = triangle[0].xy / triangle[0].w;
vec2 p1 = triangle[1].xy / triangle[1].w;
vec2 center = 0.5 * (p0 + p1);
float triangle_width = length(p0 - center);
float scale_ratio = pixel_unit / triangle_width;
if (scale_ratio > 1.0) {
    // Scale in NDC, then convert back to clip space by multiplying with w.
    triangle[0].xy = ((p0 - center) * scale_ratio + center) * triangle[0].w;
    triangle[1].xy = ((p1 - center) * scale_ratio + center) * triangle[1].w;
}
This issue can also be addressed via conservative rasterisation. The following summary is reproduced from the documentation for the NV_conservative_raster OpenGL extension:
This extension adds a "conservative" rasterization mode where any pixel
that is partially covered, even if no sample location is covered, is
treated as fully covered and a corresponding fragment will be shaded.
Similar extensions exist for the other major graphics APIs.
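On NVIDIA hardware exposing NV_conservative_raster, enabling it is a single state switch. A sketch: check that the extension is actually advertised first; the enum value below is taken from the extension spec in case your loader headers lack it.
#ifndef GL_CONSERVATIVE_RASTERIZATION_NV
#define GL_CONSERVATIVE_RASTERIZATION_NV 0x9346
#endif

glEnable(GL_CONSERVATIVE_RASTERIZATION_NV);   // partially covered pixels now generate fragments
// ...draw the thin triangles...
glDisable(GL_CONSERVATIVE_RASTERIZATION_NV);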

transparency in opengl (using FLTK)

I'm drawing some 3D structures in an Fl_Gl_Window using FLTK's OpenGL support. These are drawn and rotated, so the code looks something like:
glTranslatef(-xshift, -yshift, -zshift);
glRotatef(angle, ax, ay, az);   // glRotatef takes an angle followed by a rotation axis
glTranslatef(xshift, yshift, zshift);
glColor4f((120.0/256.0), (120.0/256.0), (120.0/256.0), 0.2);
for (int side = 0; side < num_sides; side++) {
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glBegin(GL_TRIANGLES);
    // draw shape
    glEnd();
    glDisable(GL_BLEND);
}
and it almost works, except that at different angles the transparency doesn't work properly. For example, if I draw a cube, from one side it will look transparent all the way through, with no way to tell the two faces apart, but from the other side one face will appear darker, as it is supposed to. It's as if it calculates the transparency too 'early', as in before the rotation. Am I doing something wrong? Should I move the rotation below the transparency effects (i.e. before them in execution), or does the order of the triangles matter?
The order of the triangles matters. To get the desired effect for transparency you need to render the triangles in back-to-front order, because hardware blending works by reading the color already in the framebuffer for that fragment and blending it with the fragment currently being shaded. That's why you get different results when you rotate your cube: you are not changing the order of the triangles within the cube. You may also want to look into order-independent transparency techniques.
Depending on how many triangles you have, sorting them every frame can get really expensive. One approximation technique is to presort the triangles along the x, y and z axes and then choose the sorted order that most closely matches your viewing direction. This only works to a certain extent. One popular order-independent transparency technique is depth peeling. Here's a tutorial with some code for implementing it: http://mmmovania.blogspot.com/2010/11/order-independent-transparency.html?m=1. You might also want to read the original paper to get a better understanding of the technique: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.18.9286&rep=rep1&type=pdf.
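A minimal sketch of the per-frame back-to-front sort (Triangle is a hypothetical struct, GLM assumed for the math):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

struct Triangle { glm::vec3 a, b, c; };

void sortBackToFront(std::vector<Triangle>& tris, const glm::mat4& modelView)
{
    // Eye space looks down -z, so a more negative z means farther from the camera.
    auto eyeDepth = [&](const Triangle& t) {
        glm::vec3 centroid = (t.a + t.b + t.c) / 3.0f;
        return (modelView * glm::vec4(centroid, 1.0f)).z;
    };
    // Draw the farthest triangles first: sort ascending by eye-space z.
    std::sort(tris.begin(), tris.end(),
              [&](const Triangle& x, const Triangle& y) { return eyeDepth(x) < eyeDepth(y); });
}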

Deferred Lighting | Point Lights Using Circles

I'm implementing a deferred lighting mechanism in my OpenGL graphics engine following this tutorial. It works fine, I don't get into trouble with that.
When it comes to point lights, it says to render spheres around the lights so that only those pixels that might be affected by the light are passed through the lighting shader. There are some issues with that method concerning cull-face and camera position, explained precisely here. To solve those, the tutorial uses the stencil test.
I doubt the efficiency of that method, which leads me to my first question:
Wouldn't it be much better to draw a circle representing the light-sphere?
A sphere always looks like a circle on the screen, no matter from which perspective you're looking at it. The task would be to determine the screen position and scaling of the circle. This method would have three advantages:
No cull-face issue
No camera-position-inside-light-sphere issue
Much more efficient (vertex count severely reduced + no stencil test)
Are there any disadvantages to this technique?
My second question deals with implementing the mentioned method. The circle's center position can easily be calculated as usual:
vec4 screenpos = modelViewProjectionMatrix * vec4(pos, 1.0);
vec2 centerpoint = vec2(screenpos / screenpos.w);
But now, how do I calculate the scaling of the resulting circle?
It should depend on the distance (camera to light) and somehow on the perspective projection.
I don't think that would work. The point of using spheres is that they act as light volumes, not just circles. We want to apply lighting to those fragments in the scene that are inside the light volume. As the scene is rendered, the depth buffer is written to; this data is used by the light-volume render step to apply lighting correctly. If it were just a circle, you would have no way of knowing whether geometry well in front of or behind the light volume should be illuminated or not, even if the circle were projected to the correct depth.
I didn't read the whole thing, but I think I understand the general idea of this method.
It won't help much. You will still have issues if you move the camera so that the circle ends up behind the near plane: in that case none of the fragments will be generated, and the light will "disappear".
The lights described in the article will have a sharp falloff, understandably so, since a sphere or circle has a sharp border. I wouldn't call it point lighting...
To me this looks like premature optimization... I would just render a whole screen quad and do the shading almost as usual, with no special cases to worry about. Don't forget that all the manipulation of OpenGL state and the additional draw operations also introduce overhead, and it is not clear which overhead will outweigh the other here.
You forgot to do perspective division here
The simplest way to calculate the scaling: transform a point on the surface of the sphere to screen coordinates and calculate the length of the vector from the projected center to it. It must be a point on the silhouette (border) in screen space, obviously.
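A sketch of that calculation, assuming GLM; lightPos and radius describe the light sphere in world space, and the result is the circle's centre and radius in normalized device coordinates:
#include <glm/glm.hpp>

// Projects the sphere centre and a point offset along the camera's right axis,
// then measures the distance between them in NDC.
void projectedCircle(const glm::vec3& lightPos, float radius,
                     const glm::mat4& view, const glm::mat4& proj,
                     glm::vec2& outCenter, float& outRadius)
{
    glm::vec3 right = glm::vec3(view[0][0], view[1][0], view[2][0]); // camera right axis in world space
    glm::vec4 c = proj * view * glm::vec4(lightPos, 1.0f);
    glm::vec4 e = proj * view * glm::vec4(lightPos + right * radius, 1.0f);

    glm::vec2 cNdc = glm::vec2(c) / c.w;   // perspective division
    glm::vec2 eNdc = glm::vec2(e) / e.w;

    outCenter = cNdc;
    outRadius = glm::length(eNdc - cNdc);
}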