What is the difference between gl_ClipDistance and glClipPlane?
Which is effective for clipping? Is the clip plane constrained to normalized device coordinates, i.e. from -1.0 to 1.0?
Assuming you are referring to the OpenGL function glClipPlane and the GLSL variable gl_ClipDistance: those two are not directly related.
glClipPlane controls clipping planes in the fixed-function pipeline and is deprecated.
gl_ClipDistance is the modern replacement and is set from inside the vertex shader. It has to contain the signed distance of the current vertex to a clip plane. In this case OpenGL does not know anything about the clip plane itself, since the only relevant values are the distances to these planes.
The plane coefficients (in both cases) are technically not constrained to any range, but in practice only planes that intersect the [-1, 1] clip-space cube will have any visible effect, since clipping against that cube happens regardless.
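For illustration, here is a minimal GLSL 1.30 sketch of how such a distance is typically computed (the uniform names clip_plane, modelview, and projection are assumptions, and the plane is assumed to be given in eye space); glEnable(GL_CLIP_DISTANCE0) must also be called on the application side, as shown in the next answer:

#version 130
uniform vec4 clip_plane;   // plane coefficients (a, b, c, d); dot(clip_plane, p) == 0 on the plane
uniform mat4 modelview;
uniform mat4 projection;
in vec3 position;
void main() {
    vec4 eye_pos = modelview * vec4(position, 1.0);
    gl_Position  = projection * eye_pos;
    // Signed distance of this vertex to the plane; the negative side gets clipped
    gl_ClipDistance[0] = dot(clip_plane, eye_pos);
}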
Related
I have four arbitrary points (lt, rt, rb, lb) in 3D space and I would like these points to define my near clipping plane (lt stands for left-top, rt for right-top, and so on).
Unfortunately, these points are not necessarily a rectangle in screen space. They are, however, a rectangle in world coordinates.
The context is that I want to have a mirror surface by rendering the mirrored world into a texture. The mirror is an arbitrarily translated and rotated rectangle in 3D space.
I do not want to change the texture coordinates on the vertices, because that would lead to ugly pixelation when you look at the mirror from the side, for example. If I did that, culling would also not work correctly, which would have a huge performance impact in my case (small mirror, huge world).
I also cannot work with the stencil buffer, because in some scenarios I have mirrors facing each other, which would also lead to a huge performance drop. Furthermore, I would like to keep my rendering pipeline simple.
Can anyone tell me how to compute the according projection matrix?
Edit: Of course, I have already moved my camera accordingly. That is not the problem here.
Instead of tweaking the projection matrix (which I don't think can be done in the general case), you should define an additional clipping plane. You do that by enabling:
glEnable(GL_CLIP_DISTANCE0);
Then set the gl_ClipDistance vertex shader output to the distance of the vertex from the mirror plane:
gl_ClipDistance[0] = dot(vec4(vertex_position, 1.0), mirror_plane);
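For completeness, here is a sketch (the corner winding and the coordinate space are assumptions; mirror_plane and vertex_position must live in the same space) of how mirror_plane could be built from the question's lt, rt, lb corners:

vec3 n = normalize(cross(rt - lt, lb - lt)); // mirror normal; negate it if it points the wrong way
vec4 mirror_plane = vec4(n, -dot(n, lt));    // then dot(mirror_plane, vec4(p, 1.0)) == 0 for points p on the mirror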
As far as I understand, the location of a point/pixel cannot be a fraction, at least on a raster graphics system where the hardware uses pixels to display images.
Then, why and how does OpenGL use fractional values for plotting pixels?
For example, how is it possible: glVertex2f(0.15f, 0.51f); ?
This command does not plot any pixels. It merely defines the location of a point in 3D space (with glVertex2f the z coordinate defaults to 0; a pixel on the screen would only need 2 coordinates). This is the starting point for the OpenGL pipeline. The point then goes through a lot of transformations before it ends up on the screen.
Also, the coordinates are unitless. For example, you can say that your viewport spans from 0.0f to 1.0f, and then these coordinates make a lot of sense. Basically, you have to think of these points in terms of mathematics, not pixels.
I would suggest some reading on how OpenGL transformations work, for example here, here or the tutorial here.
The vectors you pass into OpenGL are not viewport positions but arbitrary numbers in some vector space. Only after a chain of transformations are these numbers mapped into viewport pixel positions. With the old fixed-function pipeline, this mapping could be anything that can be represented by a vector-matrix multiplication.
These days, where everything is programmable (shaders), the mapping can very well be any kind of function you can think of. For example, the values you pass to glVertex (an immediate-mode call, but its data is also available to shaders in OpenGL 2.1) may be interpreted as polar coordinates in the vertex shader.
The following is a perfectly valid OpenGL 2.1 vertex shader that interprets the vertex position to be in polar coordinates. Note that since triangles and lines have straight edges and polar coordinates are curvilinear, this gives good visual results only for points or highly tessellated primitives.
#version 110
void main() {
    // gl_Vertex.x is used as the angle, gl_Vertex.y as the radius
    gl_Position = gl_ModelViewProjectionMatrix
                * vec4(gl_Vertex.y * vec2(sin(gl_Vertex.x), cos(gl_Vertex.x)), 0.0, 1.0);
}
As you can see here, the values passed to glVertex are actually arbitrary, unitless components of vectors in some vector space. Only by applying some transformation into viewport space do these vectors gain meaning. Hence it makes no sense to impose a certain value range onto the values that go into a vertex attribute.
Vertices and pixels are very different things.
It's quite possible to have all your vertices within one pixel (although in this case you probably need help with LODing).
You might want to start here...
http://www.glprogramming.com/blue/ch01.html
Specifically...
Primitives are defined by a group of one or more vertices. A vertex defines a point, an endpoint of a line, or a corner of a polygon where two edges meet. Data (consisting of vertex coordinates, colors, normals, texture coordinates, and edge flags) is associated with a vertex, and each vertex and its associated data are processed independently, in order, and in the same way.
And...
Rasterization produces a series of frame buffer addresses and associated values using a two-dimensional description of a point, line segment, or polygon. Each fragment so produced is fed into the last stage, per-fragment operations, which performs the final operations on the data before it's stored as pixels in the frame buffer.
For your example, before glVertex2f(0.15f, 0.51f) ends up on the screen, many transforms have to be done. Simplifying a complex process crudely: after moving your vertex to view space (applying the camera position and direction), the magic is in (1) the projection matrix, and (2) the viewport settings.
Internally, OpenGL "screen coordinates" (normalized device coordinates) live in a cube from (-1, -1, -1) to (1, 1, 1):
http://www.matrix44.net/cms/wp-content/uploads/2011/03/ogl_coord_object_space_cube.png
The projection matrix 'squeezes' the view frustum into this cube (which you do in the vertex shader), assuming a perspective transform; if the projection is orthogonal, the view volume is just a box limited by the near and far values (plus, in both cases, scaling factors):
http://www.songho.ca/opengl/files/gl_projectionmatrix01.png
EDIT: Maybe a better example is here:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/#The_Projection_matrix
(EDIT: The Z coordinate is used as the depth value.) When fragments are finally mapped to pixels on a texture/framebuffer/screen, their coordinates are transformed using the viewport settings:
https://www3.ntu.edu.sg/home/ehchua/programming/opengl/images/GL_2DViewportAspectRatio.png
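As a sketch (the variable names are assumptions), that final fixed-function mapping from normalized device coordinates to window coordinates is just a scale and bias:

// ndc: clip-space position after the division by w, each component in [-1, 1]
// x0, y0, width, height come from glViewport; near, far from glDepthRange
vec2  window_pos = vec2(x0, y0) + (ndc.xy * 0.5 + 0.5) * vec2(width, height);
float depth      = near + (ndc.z * 0.5 + 0.5) * (far - near);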
Hope this helps!
So when drawing a rectangle in OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0,1), you'll get the standard mapping.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect along the diagonal.
I saw from the page below that this can be fixed, but the solution given works only for trapezoids. Also, I have to do this over many rectangles.
And so, the question is: what is the proper, most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles, and if you visualize the quad as two triangles, you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), then r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 components or not.
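To make the q-coordinate approach concrete, here is a GLSL 1.20 sketch (the attribute, varying, and sampler names are assumptions, and computing a suitable per-vertex q is the trapezoid math from the page linked in the question):

// Vertex shader: scale s and t by the per-vertex projective weight q
attribute vec2  st;
attribute float q;
varying vec4 tex_coord;
void main() {
    tex_coord   = vec4(st * q, 0.0, q);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: texture2DProj divides by q per fragment, after interpolation,
// turning the affine per-triangle interpolation into a projective mapping
uniform sampler2D texture_sampler;
varying vec4 tex_coord;
void main() {
    gl_FragColor = texture2DProj(texture_sampler, tex_coord);
}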
How can I render a plane on the surface of a glClipPlane clipping plane? The plane is rendered by drawing a polygon between a series of points that are located on the plane. Right now, it's producing some very fun stitching.
I assume glPolygonOffset won't help me here?
It is possible for me to just translate the plane a tiny bit to one side, but I would prefer a simpler and more elegant solution, if one exists.
I assume you are not using GLSL? If you have GLSL 1.30, you can just set a vertex's gl_ClipDistance[N] >= 0.0 for clip plane N and the point will not be clipped. If you do that for every vertex in the polygon you want to span your plane, the polygon will not be clipped (against that plane, anyway).
As for glPolygonOffset (...), that affects the depth computed during rasterization. Rasterization happens after clipping, and clipping has nothing to do with polygon depth, so you are correct that it will not help. You would have to translate your vertices before/during primitive assembly for this to work, which means performing the translation either in a vertex shader or through the fixed-function modelview matrix.
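A minimal sketch of the GLSL 1.30 route (compatibility-profile built-ins are used for brevity), applied only when drawing the polygon that should lie on the clip plane:

#version 130
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // A constant non-negative distance: this polygon is never clipped
    // against clip plane 0, no matter where it actually lies
    gl_ClipDistance[0] = 1.0;
}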
I was planning on using gl_ClipDistance in my vertex shader until I realized my project is OpenGL 2.1/GLSL 1.2, which means gl_ClipDistance is not available.
gl_ClipVertex is the predecessor to gl_ClipDistance but I can find very little information about how it works and, especially relative to gl_ClipDistance, how it is used.
My goal is to clip the intersection of clipping planes without the need for multiple rendering passes. In the above referenced question, it was suggested that I use gl_ClipDistance. Examples like this one are clear to me, but I don't see how to apply it to gl_ClipVertex.
How can I use gl_ClipVertex for the same purpose?
When in doubt, you should always examine the formal GLSL specification. In particular, since this predeclared variable was introduced in GLSL 1.3, you know (or should assume) that there will be a discussion of the deprecation of the old technique and the introduction of the new one.
In fact, there is if you look here:
The OpenGL® Shading Language 1.3 - 7.1 Vertex Shader Special Variables - pp. 60-61
The variable gl_ClipVertex is deprecated. It is available only in the vertex language and provides a place for vertex shaders to write the coordinate to be used with the user clipping planes. The user must ensure the clip vertex and user clipping planes are defined in the same coordinate space. User clip planes work properly only under linear transform. It is undefined what happens under non-linear transform.
Further investigation of the actual types used for both should also give a major hint as to the difference between the two:
out float gl_ClipDistance[]; // may be written to
out vec4 gl_ClipVertex; // may be written to, deprecated
You will notice that gl_ClipVertex is a full-blown positional (4-component) vector, whereas gl_ClipDistance[] is simply an array of floating-point numbers. What you may not notice is that gl_ClipDistance[] is an input/output for geometry shaders and an input to fragment shaders, whereas gl_ClipVertex exists only in vertex shaders.
The clip vertex is the position used for clipping, whereas the clip distances are the distances from each clipping plane (which you are allowed to calculate yourself). The ability to specify the distance to each clipping plane arbitrarily allows for the non-linear transformations discussed above; prior to this, all you could do was set the location used to compute the distance from each clipping plane.
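In practical terms, the GLSL 1.20 answer to the question is a sketch like this: write the eye-space position to gl_ClipVertex, since glClipPlane stores its planes in eye space:

#version 120
void main() {
    // Fixed-function clipping tests this position against every plane
    // enabled with glEnable(GL_CLIP_PLANE0 + i)
    gl_ClipVertex = gl_ModelViewMatrix * gl_Vertex;
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}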
To put this all in perspective:
The calculation of clipping from the clip vertex used to occur as part of the fixed-function pipeline, between vertex transformation and fragment shading. When GLSL 1.3 was introduced, Shader Model 4.0 had already been formally defined by DX10 for a number of years, exposing programmable primitive assembly and logically more flexible computation of clipping. We did not get geometry shaders until GLSL 1.5, but many other parts of Shader Model 4.0 were gradually introduced between 1.3 and 1.5.