I was planning on using gl_ClipDistance in my vertex shader until I realized my project is OpenGL 2.1/GLSL 1.2 which means gl_ClipDistance is not available.
gl_ClipVertex is the predecessor to gl_ClipDistance but I can find very little information about how it works and, especially relative to gl_ClipDistance, how it is used.
My goal is to clip the intersection of clipping planes without the need for multiple rendering passes. In the above referenced question, it was suggested that I use gl_ClipDistance. Examples like this one are clear to me, but I don't see how to apply it to gl_ClipVertex.
How can I use gl_ClipVertex for the same purpose?
When in doubt, you should always examine the formal GLSL specification. In particular, since this pre-declared variable was introduced in GLSL 1.3, you know (or should assume) that there will be a discussion of the deprecation of the old technique and the implementation of the new one.
In fact, there is if you look here:
The OpenGL® Shading Language 1.3 - 7.1 Vertex Shader Special Variables - pp. 60-61
The variable gl_ClipVertex is deprecated. It is available only in the vertex language and provides a place for vertex shaders to write the coordinate to be used with the user clipping planes. The user must ensure the clip vertex and user clipping planes are defined in the same coordinate space. User clip planes work properly only under linear transform. It is undefined what happens under non-linear transform.
Further investigation of the actual types used for both should also give a major hint as to the difference between the two:
out float gl_ClipDistance[]; // may be written to
out vec4 gl_ClipVertex; // may be written to, deprecated
You will notice that gl_ClipVertex is a full-blown positional (4-component) vector, whereas gl_ClipDistance[] is simply an array of floating-point numbers. What you may not notice is that gl_ClipDistance[] is an input/output for geometry shaders and an input to fragment shaders, whereas gl_ClipVertex only exists in vertex shaders.
The clip vertex is the position used for clipping, whereas the clip distances are the distances from each clipping plane (which you are allowed to calculate yourself). The ability to specify the distance arbitrarily for each clipping plane allows for the non-linear transformations discussed above; prior to this, all you could do was set the location used to compute the distance from each clipping plane.
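To make this concrete, here is a minimal sketch of how the old mechanism is used in a GLSL 1.20 vertex shader. It assumes the planes are specified on the host with glClipPlane (which stores them in eye space) and enabled with glEnable(GL_CLIP_PLANE0), glEnable(GL_CLIP_PLANE1), and so on; all the shader can do is choose the coordinate that gets tested:

#version 120
void main()
{
    // Eye-space position; glClipPlane transforms its plane into eye space using the
    // modelview matrix current at the time of the call, so both live in the same space.
    vec4 eyePos   = gl_ModelViewMatrix * gl_Vertex;

    gl_ClipVertex = eyePos;                          // coordinate tested against the user clip planes
    gl_Position   = gl_ProjectionMatrix * eyePos;
}

Unlike with gl_ClipDistance, the per-plane distances are then computed by the fixed-function pipeline from this single position; you cannot supply them yourself.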
To put this all in perspective:
The calculation of clipping from the clip vertex used to occur as part of the fixed-function pipeline, between vertex transformation and fragment shading. When GLSL 1.3 was introduced, Shader Model 4.0 had already been formally defined by DX10 for a number of years, and it exposed programmable primitive assembly and a logically more flexible computation of clipping. We did not get geometry shaders until GLSL 1.5, but many other parts of Shader Model 4.0 were gradually introduced between 1.3 and 1.5.
Related
What is the difference between gl_ClipDistance and glClipPlane?
Which one should be used for clipping? Is the clip plane constrained to normalised device coordinates, i.e. from -1.0 to 1.0?
Assuming you are referring to the OpenGL function glClipPlane and the GLSL variable gl_ClipDistance: those two are not directly related.
glClipPlane controls clipping planes in the fixed-function pipeline and is deprecated.
gl_ClipDistance is the modern version and is set from inside the shader. It has to contain the distance of the current vertex to each clip plane. In this case OpenGL does not know anything about the clip planes themselves, since the only relevant values are the distances to them.
The plane values (in both cases) are technically not constrained to any range, but in practice only planes intersecting the [-1, 1] cube will have any visible effect, since clipping against the unit cube still happens.
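For contrast, a minimal sketch of the shader-side version in GLSL 1.30, assuming a single plane passed in eye space through a uniform (u_ClipPlane and the other names here are just placeholders) and GL_CLIP_DISTANCE0 enabled on the host:

#version 130
uniform mat4 u_ModelView;
uniform mat4 u_Projection;
uniform vec4 u_ClipPlane;   // plane equation (a, b, c, d) in eye space
in vec4 a_Position;

void main()
{
    vec4 eyePos        = u_ModelView * a_Position;
    gl_ClipDistance[0] = dot(eyePos, u_ClipPlane);  // signed distance; the negative side is clipped
    gl_Position        = u_Projection * eyePos;
}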
I have a GLSL shader that draws a 3D curve given a set of Bezier curves (3D coordinates of points). The drawing itself is done as I want, except the occlusion does not work correctly: under certain viewpoints, the curve that is supposed to be in the very front appears to be still occluded, and vice versa, the part of a curve that is supposed to be occluded is still visible.
To illustrate, here are a couple of example screenshots:
Colored curve is closer to the camera, so it is rendered correctly here.
Colored curve is supposed to be behind the gray curve, yet it is rendered on top.
I'm new to GLSL and might not know the right term for this kind of effect, but I assume it is occlusion culling (update: it actually indicates a problem with the depth buffer; terminology confusion!).
My question is: How do I deal with occlusions when using GLSL shaders?
Do I have to treat them inside the shader program, or somewhere else?
Regarding my code, it's a bit long (plus I use an OpenGL wrapper library), but the main steps are:
In the vertex shader, I calculate gl_Position = ModelViewProjectionMatrix * Vertex; and pass further the color info to the geometry shader.
In the geometry shader, I take 4 control points (lines_adjacency) and their corresponding colors and produce a triangle strip that follows a Bezier curve (I use some basic color interpolation between the Bezier segments).
The fragment shader is also simple: gl_FragColor = VertexIn.mColor;.
Regarding the OpenGL settings, I enable GL_DEPTH_TEST, but it does not seem to do what I need. Also, if I put any other non-shader geometry into the scene (e.g. a quad), the curves are always rendered on top of it regardless of the viewpoint.
Any insights and tips on how to resolve it and why it is happening are appreciated.
Update: solution
So the initial problem, as I learned, was not about finding a culling algorithm, but that I did not handle the calculation of the z-values correctly (see the accepted answer). I also learned that, given the right depth buffer set-up, OpenGL handles the occlusions correctly by itself, so I do not need to re-invent the wheel.
I searched through my GLSL program and found that I was basically setting the z-values to zero in my geometry shader when translating the vertex coordinates to screen coordinates (vec2( vertex.xy / vertex.w ) * Viewport;). I fixed it by calculating the z-values (vertex.z / vertex.w) separately and assigning them to the emitted points (gl_Position = vec4( screenCoords[i], zValues[i], 1.0 );), as sketched below. That solved my problem.
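For reference, the shape of the change in the geometry shader was roughly this (variable names as in the snippets above; the surrounding emit loop is omitted):

vec2 screenCoord = vec2( vertex.xy / vertex.w ) * Viewport;  // screen-space xy, as before
float zValue     = vertex.z / vertex.w;                      // depth that was previously left at zero
gl_Position      = vec4( screenCoord, zValue, 1.0 );
EmitVertex();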
Regarding the depth buffer settings, I didn't have to specify them explicitly, since the library I use sets them up by default exactly as I need.
If you don't use the depth buffer, then the most recently rendered object will be on top always.
You should enable it with glEnable(GL_DEPTH_TEST), set the function to your liking (glDepthFunc(GL_LEQUAL)), and make sure you clear it every frame with everything else (glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)).
Then make sure your vertex shader is properly setting the Z value of the final vertex. It looks like the simplest way for you is to set the "Model" portion of ModelViewProjectionMatrix on the CPU side to have a depth value before it gets passed into the shader.
As long as you're using an orthographic projection matrix, rendering should not be affected (besides making the draw order correct).
The more I read, the more confused I become. I am trying to learn how to move from the old OpenGL 1 fixed pipeline to modern GL. I have learned a lot already, but there is one thing I am still unsure about. In old tutorials it is just used as gl_Normal; in newer ones it is often referred to as vnormal or v_normal.
In older versions I didn't have to take care of that, and in the fixed pipeline it seemed to be provided automatically. So where does it come from, or rather, how do I calculate it? Must it be done in C++, or can it be done in the vertex or fragment shader as well, from the vertex position (referred to as gl_Vertex in old tutorials)?
A sample or pseudo code would be nice.
Normals never came automatically. Even with the fixed pipeline, you had to provide them yourself.
gl_Normal was a pre-defined vertex shader attribute that was fed from glNormalPointer. In the core profile (the fixed-function attribute functions were deprecated in OpenGL 3.0 and removed in 3.1) all attributes have to come from glVertexAttribPointer: there are no predefined attributes, and the programmer has to bind every array to an attribute location himself.
So the normal, or whatever it is called, is just a named attribute. You have to get its location (with glGetAttribLocation) and assign an array containing vertex normals (the normal to the surface at each specified vertex) to that location.
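A minimal host-side sketch of that (the attribute name a_normal, the program handle, and the buffer normalVBO are placeholders, not anything mandated by a particular library):

// GLSL side:  attribute vec3 a_normal;   (or "in vec3 a_normal;" in newer GLSL)
GLint loc = glGetAttribLocation(program, "a_normal");
glEnableVertexAttribArray(loc);
glBindBuffer(GL_ARRAY_BUFFER, normalVBO);                 // buffer holding one normal per vertex
glVertexAttribPointer(loc, 3, GL_FLOAT, GL_FALSE, 0, 0);  // 3 floats per vertex, tightly packed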
As for calculating normals: it is trivial for flat surfaces (just the cross product of two triangle edges), but for smooth shading the per-vertex normals have to be averaged across the adjacent polygons. This is usually done in 3D mesh editors and simply exported to a file.
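For the flat case, the per-face computation is only a few lines; a sketch in plain C:

#include <math.h>

// Face normal of a triangle (v0, v1, v2): cross product of two edges, then normalize.
void face_normal(const float v0[3], const float v1[3], const float v2[3], float n[3])
{
    float e1[3] = { v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2] };
    float e2[3] = { v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2] };
    n[0] = e1[1] * e2[2] - e1[2] * e2[1];
    n[1] = e1[2] * e2[0] - e1[0] * e2[2];
    n[2] = e1[0] * e2[1] - e1[1] * e2[0];
    float len = sqrtf(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
}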
I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
Some people are saying that it's still OK to call glMatrixMode(GL_PROJECTION) and then glOrtho. This has always worked for me in the past, but I'm wondering if this has been deprecated in modern OpenGL.
If so, are vertex shaders the standard way to do an orthographic projection nowadays? Does GLSL provide a handy built-in function to set up an orthographic projection, or do I need to write that math myself in the vertex shader?
If so, are vertex shaders the standard way to do an orthographic projection nowadays?
Not quite. Vertex shaders perform the calculations, but the transformation matrices are usually fed into the shader through a uniform. A shader should only evaluate things that vary with each vertex; implementing an "ortho" function that returns a projection matrix in GLSL is counterproductive.
I'd like to set up an orthographic projection using only modern OpenGL techniques (i.e. no immediate-mode stuff). I'm seeing conflicting info on the web about how to approach this.
The matrix stack of OpenGL before version 3 has nothing to do with immediate mode. Immediate mode was glBegin(…); for(…){ …; glVertex(…); } glEnd();. And up to OpenGL version 3 it was rather common to use the matrices specified through the matrix stack in a vertex shader.
With OpenGL-3 the aim was to do what was originally planned for OpenGL-2: a unification of the API and the removal of old cruft. One of the things removed was the matrix stack. Many shaders already used more than the standard matrices anyway (for example for skeletal animation), so matrices were already being passed as uniforms and the programs already contained the whole matrix math code. Removing the matrix math from OpenGL-3 was just the next logical step.
In general, you do not compute matrices in shaders. You compute them on the CPU and upload them as uniforms to the shaders. So your vertex shader would neither know nor care if it is for an orthographic projection or a perspective one. It just transforms by whatever projection matrix it is given.
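As a sketch of what that looks like on the shader side (the uniform and attribute names are just examples):

#version 330 core
uniform mat4 u_Projection;   // orthographic or perspective; built on the CPU and uploaded as a uniform
uniform mat4 u_ModelView;
layout(location = 0) in vec3 a_Position;

void main()
{
    gl_Position = u_Projection * u_ModelView * vec4(a_Position, 1.0);
}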
When you target an OpenGL version above 3.0, glOrtho has been removed from the core specification (it is still present in the compatibility profile), so you shouldn't use it anymore. Therefore, you will have to provide the projection matrix yourself and apply it in the vertex shader (for OpenGL up to 3.0, glOrtho is perfectly OK ;).
And no, there is no "handy" built-in function to produce the projection matrix in GLSL, so you have to specify it yourself. This is, however, no problem, as there is a) plenty of example code on the web and b) the equations are right in the OpenGL spec, so you can simply take them from there if you want to.
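For example, the glOrtho-equivalent matrix is simple enough to build by hand on the CPU; a sketch in C, written in column-major order as glUniformMatrix4fv expects when transpose is GL_FALSE:

// Orthographic projection matrix equivalent to glOrtho(l, r, b, t, n, f).
void ortho_matrix(float l, float r, float b, float t, float n, float f, float m[16])
{
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  =  2.0f / (r - l);
    m[5]  =  2.0f / (t - b);
    m[10] = -2.0f / (f - n);
    m[12] = -(r + l) / (r - l);
    m[13] = -(t + b) / (t - b);
    m[14] = -(f + n) / (f - n);
    m[15] =  1.0f;
}
// Then upload it with glUniformMatrix4fv(location, 1, GL_FALSE, m);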
http://www.opengl.org/wiki/Rendering_Pipeline_Overview says that "primitives that lie on the boundary between the inside of the viewing volume and the outside are split into several primitives" after the geometry shader is run and before fragments are rasterized. Everything else I've ever read about OpenGL has also described the clipping process the same way. However, by setting gl_FragDepth in the fragment shader to values that are closer to the camera than the actual depth of the point on the triangle that generated it (so that the fragment passes the depth test when it would have failed if I were copying fixed-pipeline functionality), I'm finding that fragments are being generated for the entire original triangle even if it partially overlaps the far viewing plane. On the other hand, if all of the vertices are behind the plane, the whole triangle is clipped and no fragments are sent to the fragment shader (I suppose more technically you would say it is culled, not clipped).
What is going on here? Does my geometry shader replace some default functionality? Are there flags/hints I need to set or additional built-in variables that I need to write to in order for the next step of the rendering pipeline to know how to do partial clipping?
I'm using GLSL version 1.2 with the GL_EXT_geometry_shader4 extension on an NVIDIA GeForce 9400M.
That sounds like a driver bug. If you can see results for fragments that should have been outside the viewing region (ie: if turning off your depth writes causes the fragments to disappear entirely), then that's against the spec's behavior.
Granted, it's such a corner case that I doubt anyone's going to do anything about it.
Most graphics hardware tries as hard as possible to avoid actually clipping triangles. Clipping triangles means potentially generating 3+ triangles from a single triangle. That tends to choke the pipeline (pre-tessellation at any rate). Therefore, unless the triangle is trivially rejectable (ie: outside the clip box) or incredibly large, modern GPUs just ignore it. They let the fragment culling hardware take care of it.
In this case, because your fragment shader writes the depth, the hardware believes that it can't reject those fragments until your fragment shader has finished.
Note: I realized that if you turn on depth clamping, that turns off near and far clipping entirely. Which may be what you want. Depth values written from the fragment shader are clamped to the current glDepthRange.
Depth clamping is an OpenGL 3.2 feature, but NVIDIA has supported it for near on a decade with NV_depth_clamp. And if your drivers are recent, you should be able to use ARB_depth_clamp even if you don't get a 3.2 compatibility context.
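Enabling it on the host is a one-liner; a sketch (the token is GL_DEPTH_CLAMP_NV when using the old NVIDIA extension):

// With depth clamping on, near/far clipping is skipped and depth values
// (including those written via gl_FragDepth) are clamped to the current glDepthRange.
glEnable(GL_DEPTH_CLAMP);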
If I understood you correctly, you are wondering why your triangles aren't clipped against the far plane.
AFAIK OpenGL just clips against the four side planes after vertex assembly. The far and near clipping gets done (by spec, AFAIK) after the fragment shader, i.e. when you zoom in extremely and polygons collide with the near plane, they get rendered up to that point and don't pop away as a whole.
And I don't think the spec mentions splitting primitives at all (even though the hardware might do that in screen space, ignoring gl_FragDepth); it just mentions skipping primitives as a whole (in the case that no vertex lies in the view frustum).
Also relying on a wiki for word-exact rules is always a bad idea.
PS: http://fgiesen.wordpress.com/2011/07/05/a-trip-through-the-graphics-pipeline-2011-part-5/ explains the actual border and near & far clipping very well.