Z value bigger than 1 after w division - OpenGL

What I am doing in the vertex shader is:
shadowCoord = shadowVP * mMatrix * vec4(vertex_position,1.0);
Now to get it back in the range [-1, 1] I did this in the fragment shader:
vec3 proj = shadowCoord.xyz / shadowCoord.w;
But if I test the z value of such a point, I get a value bigger than 1.
The perspective matrix I use is obtained via:
glm::perspective(FOV, aspectRatio, near, far);
And it results in:
[2.4142 0 0 0
0 2.4142 0 0
0 0 -1.02 -1
0 0 -0.202 0]
and the shadowVP is:
shadow_Perp * shadow_View
Shouldn't proj.z be in the range [-1,1]?

Shouldn't proj.z be in the range [-1,1]?
No. It is in the range [-1,1] if the point lies inside the frustum. And the frustum is defined as -w <= x,y,z <= w for any vertex in clip space (and that w varies per vertex). But you don't do any clipping, so any value can result here. Note two things:
While I said the implication "v inside the frustum" => "NDC coords in [-1,1]" holds true, the converse does not. That means you can get NDC coords inside [-1,1] for points which lie outside of the frustum (and which might even lie behind the "viewing position").
You might also get a division by zero here.
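As a minimal sketch of that containment test, written here with GLM on the CPU side (a GLSL version would look nearly the same); the function name is made up for illustration, and clip stands for the shadowCoord from the question:

#include <glm/glm.hpp>

// Convert a clip-space position to NDC only when it lies inside the frustum,
// i.e. -w <= x, y, z <= w with w > 0. Returns false otherwise.
bool toNdcIfInside(const glm::vec4& clip, glm::vec3& ndc)
{
    if (clip.w <= 0.0f)
        return false;                       // at or behind the projection center
    if (glm::any(glm::greaterThan(glm::abs(glm::vec3(clip)), glm::vec3(clip.w))))
        return false;                       // outside one of the six frustum planes
    ndc = glm::vec3(clip) / clip.w;         // safe: w > 0 and the point is inside
    return true;
}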

Related

Jagged lines while handling UV values out of range [0,1]

I am parsing an OBJ which has texture coordinates greater than 1 and less than 0 as well. I then write it back after forcing the UV values into the range [0,1]. Based on the understanding from another question on SO, I am doing the conversion to the range [0,1] as follows.
if (oldU > 1.0) or (oldU < 0.0):
    oldU = math.modf(oldU)[0]  # keep only the fractional part
    if oldU < 0.0:
        oldU = 1 + oldU
if (oldV > 1.0) or (oldV < 0.0):
    oldV = math.modf(oldV)[0]  # keep only the fractional part
    if oldV < 0.0:
        oldV = 1 + oldV
But I see some jagged lines in my output obj file and the original obj file when rendered in some software:
(Screenshots: the original mesh vs. the mesh with UVs restricted to [0,1])
This may not work the way you expect.
Given a triangle edge that starts at U=0.9 and ends at U=1.1, after your UV wrapping it will start at 0.9 but end at 0.1, so the triangle will sample a different part of your texture. I believe this is what happens at the bottom of your mesh.
In general there is no problem with using UVs outside the 0-1 range, so first try to render the mesh as it is and see if you have any problems.
If you really want to move the UVs into the 0-1 range, then scale and shift them instead of wrapping them per vertex: iterate over all vertices and store the min and max values for U and V, then scale the UV of every vertex so that the min becomes 0 and the max becomes 1 (a sketch follows below).
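As a sketch of that min/max rescale, written in C++ for illustration (the asker's tool is Python, but the steps are the same); the UV struct and function name are made up:

#include <algorithm>
#include <vector>

struct UV { float u, v; };

// Remap all texture coordinates so that the smallest value becomes 0 and the
// largest becomes 1 per axis, preserving the relative layout of the mapping.
void rescaleUVsToUnitRange(std::vector<UV>& uvs)
{
    if (uvs.empty()) return;
    float minU = uvs[0].u, maxU = uvs[0].u;
    float minV = uvs[0].v, maxV = uvs[0].v;
    for (const UV& t : uvs) {
        minU = std::min(minU, t.u); maxU = std::max(maxU, t.u);
        minV = std::min(minV, t.v); maxV = std::max(maxV, t.v);
    }
    const float rangeU = std::max(maxU - minU, 1e-6f);   // avoid division by zero
    const float rangeV = std::max(maxV - minV, 1e-6f);
    for (UV& t : uvs) {
        t.u = (t.u - minU) / rangeU;
        t.v = (t.v - minV) / rangeV;
    }
}

Note that this changes the overall tiling (each triangle now covers a different portion of the texture than before), which may require re-authoring the texture image.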

OpenGL Sutherland-Hodgman polygon clipping algorithm in homogeneous coordinates (4D, CCS)

I have two questions. (I marked 1, 2 below)
In OpenGL, clipping is done with the Sutherland-Hodgman algorithm.
However, I wonder how the Sutherland-Hodgman algorithm works in a homogeneous system (4D).
I set up an example.
In VCS, there is a line with endpoints R = (0, 3, -2, 1) and S = (0, 0, 1, 1).
And the frustum is right = 1, left = -1, near = 1, far = 3, top = 4, bottom = -4.
Therefore, the projection matrix P is
1 0 0 0
0 1/4 0 0
0 0 -2 -3
0 0 -1 0
If we transform each endpoint with P, the results are
R' = (0, 3/4, 1, 2), S' = (0, 0, -5, -1)
I know that the perspective division should not be done yet, because if we do the perspective division first, the clipping result is not correct.
Here is what I am curious about:
1. What makes the clipping correct even though we have not done the perspective division? What mathematical properties are at work here?
2. How do I calculate the clipping result in the above situation?
(The fact that two intersections occur in the w-y coordinate system confuses me. I thought the resulting line is a single segment, not divided into two parts.)
I'm not quite sure whether you understood the Sutherland-Hodgman algorithm correctly (or at least I didn't get your example). Thus I will prove here that it doesn't make any difference whether clipping happens before or after the perspective divide. The proof is only shown for one plane (clipping has to be done against all 6 planes), since applying multiple such clipping operations after each other makes no difference here.
Let's assume we have two points (as you described), R' and S', in clip space. And we have a clipping plane P given in Hessian normal form [n, p] (if we take the left plane this is [1,0,0,1]).
If we were calculating in pure 3D space (R^3), then checking whether a line crosses this plane would be done by calculating the signed distance of both points to the plane and checking whether the signs differ. The signed distance for a point X = [x/w, y/w, z/w] is given by
D = dot(n, X) + p
Let's write down the actual equation we have (including the perspective divide):
d = n_x * x/w + n_y * y/w + n_z * z/w + p
In order to find the exact intersection point, we would, again in R^3 space, calculate for both points (A = R'/R'w, B = S'/S'w) the distance to the plane (da, db) and perform a linear interpolation (I will only write the equations for the x-coordinate here since y and z work the same way):
x = A_x * (1 - da/(da - db)) + B_x * (da/(da - db))
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da - db))
And w = 1 (since we interpolate between two points both having w = 1)
Now we already know from the previous discussion that clipping has to happen before the perspective divide, thus we have to adapt this equation. This means that for each point, the clipping cube has a different scaling w. Let's see what happens when we try to perform the same operations in P^3 (before the perspective divide):
First, we "revert" the perspective divide to get to X=[x,y,z,w] for which the distance to the plane is given by
d = n_x * x/w + n_y * y/w + n_z * z/w + p
d = (n_x * x + n_y * y + n_z * z) / w + p
d * w = n_x * x + n_y * y + n_z * z + p * w
d * w = dot([n, p], [x,y,z,w])
d * w = dot(P, X)
Since we are only interested in the sign of the whole calculation, which we haven't changed by our operations, we can compare the d*w values and get the same inside/outside result as in R^3.
For the two points R' and S', the calculated distances in P^3 are dr = da * R'w and ds = db * S'w. When we now use the same interpolation equation as above but for R' and S' we get for x:
x' = R'x * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'x * (da * R'w)/(da * R'w - db * S'w)
At first glance this looks rather different from the result we got in R^3, but since we are still in P^3 (hence x'), we still have to do the perspective divide on the result (this is allowed here, since the interpolated point will always be on the border of the view frustum and thus dividing by w will not introduce any problems). The interpolated w component is given as:
w' = R'w * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'w * (da * R'w)/(da * R'w - db * S'w)
And when calculating x/w we get
x = x' / w';
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
which is exactly the same result as when calculating everything in R^3.
Conclusion: The interpolation gives the same result, no matter whether we perform the perspective divide first and interpolate afterwards, or interpolate first and divide then. But with the second variant we avoid the problem of points flipping from behind the viewer to the front, since we are only dividing points that are guaranteed to be inside (or on the border of) the viewing frustum.
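To make this concrete, here is a small sketch that clips one segment against one clip-space plane exactly as derived above: the plane is given as [n, p], the signed quantities dr and ds are the d*w values from dot(P, X), and the interpolation happens before the divide. The function name is made up for illustration.

#include <glm/glm.hpp>

// Clip the segment R'S' (clip-space endpoints) against the plane [n, p],
// e.g. the near plane of the frustum is (0, 0, 1, 1): z + w >= 0.
// Returns false if the whole segment lies outside; otherwise writes the
// (possibly shortened) segment back into r and s.
bool clipSegmentAgainstPlane(glm::vec4& r, glm::vec4& s, const glm::vec4& plane)
{
    const float dr = glm::dot(plane, r);   // = d * w for R' (see the derivation)
    const float ds = glm::dot(plane, s);   // = d * w for S'

    if (dr < 0.0f && ds < 0.0f) return false;   // both outside: discard
    if (dr >= 0.0f && ds >= 0.0f) return true;  // both inside: keep as is

    // One endpoint inside, one outside: interpolate in homogeneous space.
    const float t = dr / (dr - ds);             // parameter of the intersection
    const glm::vec4 hit = r + t * (s - r);      // lies exactly on the plane

    if (dr < 0.0f) r = hit; else s = hit;       // replace the outside endpoint
    return true;
}

Checking it against the numbers from the question: with R' = (0, 3/4, 1, 2), S' = (0, 0, -5, -1) and the near plane (0, 0, 1, 1), the distances are 3 and -6, so t = 1/3 and the intersection is (0, 1/2, -1, 1); after the divide it lies exactly on the near plane z = -1.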
You speak of polygon clipping in a homogeneous system (4D), but from your question I assume that you actually mean homogeneous coordinates, which makes a lot more sense. (There are many possible homogeneous systems.)
Ok, so you want to use "4D" coordinates, which are really "3D coordinates and a w term". For projection transformations, the w term is the projective term that partially relates the screen-space coordinate to the original world-space position. Assuming that you are NOT interested in projective-space clipping, this term is not relevant.
I'm assuming this because the clipping box you describe is axis-aligned on planes in 3D. Even if it were rotated or scaled in 3D space, each of the planes would still be a 3D plane, the 4th coordinate always being 1.
So how to clip:
clip line segment L against each of the planes of the clipping box, i.e. 6 clipping planes in total (you describe the normals of each clipping plane aptly), and see if any intersection point v is shared by the line and the tested plane P so that
v lies on the line segment (i.e. a t between 0 and 1)
v lies within the bounds of the plane P (i.e. the coordinate should not lie beyond any of the adjacent planes. Since you are using axis-aligned clipping planes, this is easy to check.)
Any of these intersections between a (3D + w) line and one of the 3D planes occurs in 3D, and the intersection points have to be 3D coordinates. You can extend each of these coordinates with a 4th w coordinate into a "4D" coordinate so that you can further transform them using 4x4 matrices for view and projection processing. A parametric sketch of the per-plane test follows below.
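As a sketch of that per-plane intersection test (only the 0 <= t <= 1 check; the additional test against the adjacent planes is left out), assuming an axis-aligned plane given by an axis index and a value; the names are made up for illustration:

#include <glm/glm.hpp>
#include <optional>

// Intersect the 3D segment AB with the axis-aligned plane {p[axis] == value}.
// Returns the intersection point only if it lies on the segment (0 <= t <= 1).
std::optional<glm::vec3> segmentPlaneHit(const glm::vec3& a, const glm::vec3& b,
                                         int axis, float value)
{
    const float da = a[axis] - value;            // signed offset of A along the axis
    const float db = b[axis] - value;            // signed offset of B along the axis
    if (da * db > 0.0f || da == db)              // same side, or segment parallel to the plane
        return std::nullopt;
    const float t = da / (da - db);              // 0 <= t <= 1 by the test above
    return a + t * (b - a);                      // the point on the plane
}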

Why does graphics pipeline need mapping to clip coordinates and normalized device coordinates?

For perspective projection, if I use a simple projection matrix like:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 1/near 0
, which just projects onto the image plane, then I think view-space coordinates can easily be recovered by discarding and normalizing.
With an orthographic projection it does not even need a projection matrix.
But the OpenGL graphics pipeline has the above process, even though the perspective projection causes a depth precision error.
Why does it need mapping to clip coordinates and normalized device coordinates?
Added
If I use the above projection matrix,
    ( 1  0  0    0 )
p = ( 0  1  0    0 )
    ( 0  0  1    0 )
    ( 0  0  1/n  0 )
v_eye = (x y z 1)
v_clip = p * v_eye = (x y z z/n)
v_ndc = v_clip / v_clip.w = (nx/z ny/z n 1)
Then v_ndc can be clipped by discarding values beyond top, bottom, left, and right.
Values beyond the far plane can also be clipped in the same way before multiplying by the projection matrix.
Well, it may look silly, but I think it's easier than before.
P.S. I noticed that the depth buffer can't be written this way. Then, can't it be written before the projection?
Sorry for the silly question and the gibberish...
In the case of orthographic projections, you are right: the perspective divide is not required, but it does not introduce any error, since it is a division by 1. (An orthographic projection matrix always contains [0, 0, 0, 1] in the last row.)
For perspective projection, this is a bit more complex:
Let's look at the simplest perspective projection:
    ( 1  0  0  0 )
P = ( 0  1  0  0 )
    ( 0  0  1  0 )
    ( 0  0  1  0 )
Then a vector v=[x,y,z,1] (in view space) gets projected to
v_p = P * v = [x, y, z, z],
which is in projective space.
Now the perspective divide is needed to get the perspective effect (objects closer to the viewer look larger):
v_ndc = v_p / v_p.w = [x/z, y/z, 1, 1]
I don't see how this could be achieved without the perspective divide.
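As a tiny numeric check of the above, built here with GLM (the view-space point (1, 2, 4) is an arbitrary example):

#include <glm/glm.hpp>
#include <cstdio>

int main()
{
    // The minimal perspective matrix from above: the last row copies z into w.
    // GLM stores matrices column-major, so elements are addressed as P[column][row].
    glm::mat4 P(1.0f);
    P[2][3] = 1.0f;   // third column, fourth row: w_clip gets z_view
    P[3][3] = 0.0f;   // drop the 1 the identity matrix put into the w row

    const glm::vec4 v(1.0f, 2.0f, 4.0f, 1.0f);   // a view-space point
    const glm::vec4 v_p   = P * v;                // = (1, 2, 4, 4), projective space
    const glm::vec4 v_ndc = v_p / v_p.w;          // = (0.25, 0.5, 1, 1), after the divide
    std::printf("v_p   = (%g %g %g %g)\n", v_p.x, v_p.y, v_p.z, v_p.w);
    std::printf("v_ndc = (%g %g %g %g)\n", v_ndc.x, v_ndc.y, v_ndc.z, v_ndc.w);
    return 0;
}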
Why does it need mapping to clip coordinates and normalized device coordinates?
The space where the programmer leaves the vertices for the GL to take care of is clip space. It's the 4D homogeneous space where the vertices exist before normalization / perspective division. This division, which performs the perspective projection, is the mapping needed to transform the vertices from clip space to NDC (3D). Why? Similar triangles.
                View Space Point
                               *
                             / |
               Proj       /--  |
   Y ^  Plane          /--     |
     |              /--        |
     |            *--          | y
     |         /--|            |
     |      /--   | y'         |
     | /---       |            |
<----+------------+------------+-------
Z    O            |            |
     |-----d------|            |
     |------------z------------|
Perspective projection is where rays from the eye/origin cut through a projection plane, hitting the points present in the space. The point where a ray intersects the plane is the projection of the point hit. Let's say we want to project point P onto the projection plane, where all points have z = d. The projected location of P, i.e. P', needs to be found. We know that z' will be d (since the projection plane lies there). To find y', we know
y ⁄ z = y' ⁄ z' (similar triangles)
y ⁄ z = y' ⁄ d (z' = d by defn. of proj. plane)
y' = (d * y) ⁄ z
This division by z is called the perspective division. It shows that in perspective projection objects farther away, with larger z, appear smaller, and objects closer, with smaller z, appear larger.
Another thing that is convenient to perform in clip space is, obviously, clipping. In 4D, clipping is just checking whether the points lie within a range, as opposed to the costlier division.
In the case of orthographic projection, the projection isn't a frustum but a cuboid: parallel rays come from infinity rather than from the origin. Hence for a point P = (x, y, z), the Z value is just dropped, giving P' = (x, y). Thus the perspective division does nothing (divides by 1) in this case.

Deriving a value for Normal Alignment

How could I derive a value for how aligned my normals are with a point in space?
For example if my normal faces directly away from point x, it will have a value of 1, whereas if it faces directly towards it, it will have a value of 0, or something similar.
normalize(point - vertex) gives a direction vector from the vertex to the point
dot(normal, normalize(point - vertex)) gives the cosine of the angle between the vertex normal and this direction (1 when they point the same way, -1 when opposite)
0.5 - 0.5 * dot(normal, normalize(point - vertex)) inverts and scales this to the 0 to 1 range needed
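Put together as a small sketch using GLM types (a GLSL version reads the same apart from the types); the function name is made up for illustration:

#include <glm/glm.hpp>

// 1 when the normal points directly away from 'point', 0 when it points directly towards it.
float normalAlignment(const glm::vec3& vertex, const glm::vec3& normal, const glm::vec3& point)
{
    const glm::vec3 toPoint = glm::normalize(point - vertex);    // direction vertex -> point
    const float c = glm::dot(glm::normalize(normal), toPoint);   // cosine in [-1, 1]
    return 0.5f - 0.5f * c;                                      // maps +1 -> 0 and -1 -> 1
}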

OpenGL custom rendering pipeline: Perspective matrix

I am attempting to work in LWJGL to display a simple quad using my own matrices. I've been looking around for a while and have found a few perspective matrix implementations, these two in particular:
[cot(fov/2)/a 0 0 0]
[0 cot(fov/2) 0 0]
[0 0 -f/(f-n) -1]
[0 0 -f*n/(f-n) 0]
and:
[cot(fov/2)/a 0 0 0]
[0 cot(fov/2) 0 0]
[0 0 -(f+n)/(f-n) -1]
[0 0 -(2*f*n)/(f-n) 0]
Both of these provide the same effect, as expected (got them from here and here, respectively). The issue is in my understanding of how multiplying this by the modelview matrix, then a vertex, then dividing each x, y, and z value by its w value gives a screen coordinate. More specifically, if I multiply either of these by the modelview matrix then by a vertex (10, 10, 0, 1), it gives a w=0. That in itself is a big smack in the face. I conclude either the matrices are wrong, or I am missing something completely. In my actual test program, the vertices don't even end up on screen even though the camera position at (0,0,0) and no rotation would make it so. I even have tried many different z values, positive and negative, to see if it was just a clipping plane. Am I missing something here?
EDIT: After a lot of checking over, I've narrowed down the problem I am facing. The biggest issue is that the z-axis does not appear to be remapped to the range I specify (n to f). Any object just zooms in or out a little bit when I translate it along the z-axis then pops out of existence as it moves past the range [-1, 1]. I think this is also making me more confused. I set my far plane to 100 and my near to 0.1, and it behaves like anything but.
Both of these provide the same effect, as expected
While the second projection matrix form is very standard, the first one gives a different effect. If you have z==1 and w==0, the projected depth will be:
Matrix 1: -f/(f-n) / -f*n/(f-n) = f / (f*n) = 1 / n
Matrix 2: -(f+n)/(f-n) / -(2*f*n)/(f-n) = (f+n) / (2*f*n)
The result is clearly different. You should always use the second form.
if I multiply either of these by the modelview matrix then by a vertex (10, 10, 0, 1), it gives a w=0. That in itself is a big smack in the face
For a focal length d the projection is computed as (ignoring aspect ratio):
x'= d*x/z = x / w
y'= d*y/z = y / w
where
w = z / d
If you have z==0, this means that you are trying to project a point that is already at the eye, and only points beyond d are visible. In practice this point will be clipped, because z is not within the range n (near) to f (far) (n is expected to be a positive constant).
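A small sketch that reproduces the w == 0 case from the question, here with glm::perspective (which builds the second, standard matrix form) and an identity modelview; the near/far values are arbitrary examples:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    const glm::mat4 proj = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
    const glm::mat4 modelView(1.0f);                         // camera at origin, no rotation

    const glm::vec4 atEyePlane(10.0f, 10.0f,   0.0f, 1.0f);  // view-space z == 0, as in the question
    const glm::vec4 inFrustum (10.0f, 10.0f, -50.0f, 1.0f);  // between the near and far planes

    const glm::vec4 c0 = proj * modelView * atEyePlane;      // c0.w == 0: the divide is undefined
    const glm::vec4 c1 = proj * modelView * inFrustum;       // c1.w == 50: safe to divide

    std::printf("w for z =   0: %g\n", c0.w);
    std::printf("w for z = -50: %g\n", c1.w);
    return 0;
}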