How could I derive a value for how aligned my normals are with a point in space?
For example, if my normal faces directly away from point x, it will have a value of 1, whereas if it faces directly towards it, the value will be 0, or something similar.
normalize(point - vertex) gives a direction vector from the vertex to the point
dot(normal, normalize(point - vertex)) gives the cosine of the angle between the vertex normal and this direction (1 when they point the same way, -1 when opposite)
0.5 - 0.5 * dot(normal, normalize(point - vertex)) inverts and scales this to the 0 to 1 range needed
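As a quick sanity check, here is a small Python/NumPy sketch of the same expressions (the function name alignment and the test vectors are just made up for illustration):

import numpy as np

def alignment(normal, vertex, point):
    """Return 1 when the normal faces directly away from `point`,
    0 when it faces directly towards it, 0.5 when perpendicular."""
    to_point = point - vertex
    to_point = to_point / np.linalg.norm(to_point)   # normalize(point - vertex)
    n = normal / np.linalg.norm(normal)
    return 0.5 - 0.5 * np.dot(n, to_point)            # maps [-1, 1] -> [1, 0]

# Normal pointing along +Z, point directly behind the vertex along -Z:
print(alignment(np.array([0.0, 0.0, 1.0]),
                np.array([0.0, 0.0, 0.0]),
                np.array([0.0, 0.0, -2.0])))          # -> 1.0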
I am parsing an OBJ file whose texture coordinates fall outside the range [0, 1] (both greater than 1 and less than 0). I then write it back, mapping the UV values into the range [0, 1]. Based on my understanding from another question on SO, I am doing the conversion to the range [0, 1] as follows.
import math

if (oldU > 1.0) or (oldU < 0.0):
    oldU = math.modf(oldU)[0]  # keep only the fractional part
    if oldU < 0.0:
        oldU = 1 + oldU

if (oldV > 1.0) or (oldV < 0.0):
    oldV = math.modf(oldV)[0]  # keep only the fractional part
    if oldV < 0.0:
        oldV = 1 + oldV
But when the original OBJ file and my output OBJ file are rendered in some software, I see jagged lines in my output that the original does not have:
Original
Restricted to [0,1]
This may not work the way you expect.
Given a triangle edge that starts at U=0.9 and ends at U=1.1, after your UV wrapping the edge will start at 0.9 but end at 0.1, so the triangle will sample a different part of your texture. I believe this is what happens at the bottom of your mesh.
In general there's no problem with using UVs outside the 0-1 range, so first try rendering the mesh as it is and see whether you actually have any problems.
If you really want to move the UVs into the 0-1 range, then scale and offset them instead of wrapping them per vertex. Iterate over all vertices and record the minimum and maximum values of U and V, then rescale the UVs of every vertex so that the minimum becomes 0 and the maximum becomes 1.
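A minimal Python sketch of that rescaling, assuming you have already collected the UVs into a list of (u, v) pairs (the helper name rescale_uvs is just illustrative):

def rescale_uvs(uvs):
    """Linearly remap all (u, v) pairs so the minimum maps to 0 and the
    maximum maps to 1, preserving the layout instead of wrapping per vertex."""
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    min_u, max_u = min(us), max(us)
    min_v, max_v = min(vs), max(vs)
    span_u = (max_u - min_u) or 1.0   # avoid division by zero for flat ranges
    span_v = (max_v - min_v) or 1.0
    return [((u - min_u) / span_u, (v - min_v) / span_v) for u, v in uvs]

print(rescale_uvs([(-0.2, 0.9), (1.1, 1.3), (0.5, 0.0)]))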
As I understand it, in OpenGL polygons are usually clipped in clip space, and only those triangles (or the parts of triangles that remain if clipping splits them) that survive the comparison against ±w are rasterized. This requires implementing a polygon clipping algorithm such as Sutherland-Hodgman.
I am implementing my own CPU rasterizer and for now would like to avoid doing that. I have the NDC coordinates of the vertices available (not really normalized, since I did not clip anything, so the positions may lie outside the range [-1, 1]). I would like to interpolate these values for all pixels and only draw those pixels whose NDC coordinates fall within [-1, 1] in the x, y and z dimensions. I would then additionally perform the depth test.
Would this work? If so, what would the interpolation look like? Can I use the attribute interpolation formula from the OpenGL spec (equation 14.9, page 427) as described here? Alternatively, should I use formula 14.10, which is used for depth (z) interpolation, for all three coordinates (I don't really understand why a different one is used there)?
Update:
I have tried interpolating the NDC values per pixel by two methods:
w0, w1, w2 are the barycentric weights of the vertices.
1)
float x_ndc = w0 * v0_NDC.x + w1 * v1_NDC.x + w2 * v2_NDC.x;
float y_ndc = w0 * v0_NDC.y + w1 * v1_NDC.y + w2 * v2_NDC.y;
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;
2)
float x_ndc = (w0*v0_NDC.x/v0_NDC.w + w1*v1_NDC.x/v1_NDC.w + w2*v2_NDC.x/v2_NDC.w) /
(w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float y_ndc = (w0*v0_NDC.y/v0_NDC.w + w1*v1_NDC.y/v1_NDC.w + w2*v2_NDC.y/v2_NDC.w) /
(w0/v0_NDC.w + w1/v1_NDC.w + w2/v2_NDC.w);
float z_ndc = w0 * v0_NDC.z + w1 * v1_NDC.z + w2 * v2_NDC.z;
The clipping + depth test always looks like this:
if (-1.0f < z_ndc && z_ndc < 1.0f && z_ndc < currentDepth &&
-1.0f < y_ndc && y_ndc < 1.0f &&
-1.0f < x_ndc && x_ndc < 1.0f)
Case 1) corresponds to using equation 14.10 for the interpolation; case 2) corresponds to using equation 14.9.
Results documented in gifs on imgur.
1) Strange things happen when the second cube is behind the camera or when I go into a cube.
2) Strange artifacts are not visible, but as the camera approaches vertices, they start disappearing. And since this is the perspective-correct interpolation of attributes, vertices (the ones nearer to the camera?) have greater weight, so as soon as a vertex gets clipped, this information is interpolated with a strong weight across the triangle's pixels.
Is all of this expected or have I done something wrong?
Clipping against the near plane is not strictly necessary, unless the triangle goes to or past 0 in the camera-space Z. Once that happens, the homogeneous coordinate math gets weird.
Most hardware only bothers to clip triangles if they extend more than a screen's width outside the clip space or if they cross the camera-Z of zero. This kind of clipping is called "guard-band clipping", and it saves a lot of performance, since clipping isn't cheap.
So yes, the math can work fine. The main thing you have to do, when setting up your scan lines, is figure out where each of them starts and ends on screen. The interpolation math is the same either way.
I don't see any reason why this wouldn't work, but it will be way slower than traditional clipping. Note that you might get into trouble with triangles close to the projection center, since they will be vanishingly small and might cause problems in the barycentric coordinate calculation.
The difference between equations 14.9 and 14.10 is that depth is basically z/w (remapped to [0, 1]). Since the perspective divide has already happened for it, the divide has to be omitted during interpolation.
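To make the distinction concrete, here is a small Python sketch of both styles of interpolation, assuming w0, w1, w2 are the window-space barycentric weights and each vertex also carries the reciprocal of its clip-space w (the function names are only illustrative, not the spec's wording):

def interpolate_linear(w0, w1, w2, a0, a1, a2):
    """Equation 14.10 style: plain barycentric blend in window space
    (used for depth, since z has already been divided by w)."""
    return w0 * a0 + w1 * a1 + w2 * a2

def interpolate_perspective(w0, w1, w2, a0, a1, a2, iw0, iw1, iw2):
    """Equation 14.9 style: weight each attribute by 1/w of its vertex,
    blend, then renormalize by the blended 1/w."""
    num = w0 * a0 * iw0 + w1 * a1 * iw1 + w2 * a2 * iw2
    den = w0 * iw0 + w1 * iw1 + w2 * iw2
    return num / den

# Midpoint of an edge whose endpoints have clip-space w = 1 and w = 4:
print(interpolate_linear(0.5, 0.5, 0.0, 0.0, 1.0, 0.0))                      # 0.5
print(interpolate_perspective(0.5, 0.5, 0.0, 0.0, 1.0, 0.0, 1.0, 0.25, 1.0)) # 0.2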
What I am doing in the vertex shader is:
shadowCoord = shadowVP * mMatrix * vec4(vertex_position,1.0);
Now to get it back in the range [-1, 1] I did this in the fragment shader:
vec3 proj = shadowCoord.xyz / shadowCoord.w;
But if I test the z value of such a point, I get a value bigger than 1.
The perspective matrix I use is obtained via:
glm::perspective(FOV, aspectRatio, near, far);
And it results in:
[2.4142 0 0 0
0 2.4142 0 0
0 0 -1.02 -1
0 0 -0.202 0]
and the shadowVP is:
shadow_Perp * shadow_View
Shouldn't proj.z be in the range [-1,1]?
Shouldn't proj.z be in the range [-1,1]?
No. It is in the range [-1, 1] only if the point lies inside the frustum, and the frustum is defined as -w <= x,y,z <= w for every vertex in clip space (where w varies per vertex). But you don't do any clipping, so any value can result here. Note two things:
While I said the implication "v inside the frustum" => "NDC coords in [-1, 1]" holds true, the opposite does not. That means you can get NDC coords inside [-1, 1] for points which lie outside of the frustum (they might even lie behind the viewing position).
You might also get a division by 0 here.
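A small NumPy sketch of that first point, using a projection close to the one quoted above (the exact numbers are only illustrative): a point behind the camera gets a negative clip-space w, the division flips the signs, and its x/y NDC coordinates can still land inside [-1, 1].

import numpy as np

# Roughly glm::perspective with fov ~45 deg, aspect 1, near 0.1, far 10 (row-major here).
proj = np.array([
    [2.4142, 0.0,     0.0,    0.0],
    [0.0,    2.4142,  0.0,    0.0],
    [0.0,    0.0,    -1.02,  -0.202],
    [0.0,    0.0,    -1.0,    0.0],
])

def project(eye_pos):
    clip = proj @ np.append(eye_pos, 1.0)
    return clip[:3] / clip[3]          # NDC = clip.xyz / clip.w

# In front of the camera, inside the frustum: all NDC coords in [-1, 1].
print(project(np.array([0.0, 0.0, -5.0])))
# Behind the camera (eye-space z > 0): clip w is negative, the division flips
# the signs, and x/y still come out inside [-1, 1] even though the point is
# nowhere near the frustum.
print(project(np.array([0.5, 0.5, 3.0])))
# At eye-space z == 0 the clip w is 0 and the division blows up.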
I'm using a logarithmic depth algorithm, which results in someFunc(clipspace.z) being written to the depth buffer and no implicit perspective divide.
I'm doing RTT / postprocessing so later on in a fragment shader I want to recompute eyespace.xyz, given ndc.xy (from the fragment coordinates) and clipspace.z (from someFuncInv() on the value stored in the depth buffer).
Note that I do not have clipspace.w, and my stored value is not clipspace.z / clipspace.w (as it would be when using fixed function depth) - so something along the lines of ...
float clip_z = ...; /* [-1 .. +1] */
vec2 ndc = vec2(FragCoord.xy / viewport * 2.0 - 1.0);
vec4 clipspace = InvProjMatrix * vec4(ndc, clip_z, 1.0);
clipspace /= clipspace.w;
... does not work here.
So is there a way to calculate clipspace.w from clipspace.xyz, given the projection matrix or its inverse?
clipspace.xy = FragCoord.xy / viewport * 2.0 - 1.0;
This is wrong in terms of nomenclature. "Clip space" is the space that the vertex shader (or whatever the last Vertex Processing stage is) outputs. Between clip space and window space is normalized device coordinate (NDC) space. NDC space is clip space divided by the clip space W coordinate:
vec3 ndcspace = clipspace.xyz / clipspace.w;
So the first step is to take our window space coordinates and get NDC space coordinates. Which is easy:
vec3 ndcspace = vec3(FragCoord.xy / viewport * 2.0 - 1.0, depth);
Now, I'm going to assume that your depth value is the proper NDC-space depth. I'm assuming that you fetch the value from a depth texture, then use the depth range near/far values it was rendered with to map it into the [-1, 1] range. If you didn't, you should.
So, now that we have ndcspace, how do we compute clipspace? Well, that's obvious:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obvious and... not helpful, since we don't have clipspace.w. So how do we get it?
To get this, we need to look at how clipspace was computed the first time:
vec4 clipspace = Proj * cameraspace;
This means that clipspace.w is computed by taking cameraspace and dot-producting it by the fourth row of Proj.
Well, that's not very helpful. It gets more helpful if we actually look at the fourth row of Proj. Granted, you could be using any projection matrix, and if you're not using the typical projection matrix, this computation becomes more difficult (potentially impossible).
The fourth row of Proj, using the typical projection matrix, is really just this:
[0, 0, -1, 0]
This means that the clipspace.w is really just -cameraspace.z. How does that help us?
It helps by remembering this:
ndcspace.z = clipspace.z / clipspace.w;
ndcspace.z = clipspace.z / -cameraspace.z;
Well, that's nice, but it just trades one unknown for another; we still have an equation with two unknowns (clipspace.z and cameraspace.z). However, we do know something else: clipspace.z comes from dot-producting cameraspace with the third row of our projection matrix. The traditional projection matrix's third row looks like this:
[0, 0, T1, T2]
Where T1 and T2 are non-zero numbers. We'll ignore what these numbers are for the time being. Therefore, clipspace.z is really just T1 * cameraspace.z + T2 * cameraspace.w. And if we know cameraspace.w is 1.0 (as it usually is), then we can remove it:
ndcspace.z = (T1 * cameraspace.z + T2) / -cameraspace.z;
So, we still have a problem. Actually, we don't. Why? Because there is only one unknown in this equation. Remember: we already know ndcspace.z. We can therefore use ndcspace.z to compute cameraspace.z:
ndcspace.z = -T1 + (-T2 / cameraspace.z);
ndcspace.z + T1 = -T2 / cameraspace.z;
cameraspace.z = -T2 / (ndcspace.z + T1);
T1 and T2 come right out of our projection matrix (the one the scene was originally rendered with). And we already have ndcspace.z. So we can compute cameraspace.z. And we know that:
clipspace.w = -cameraspace.z;
Therefore, we can do this:
vec4 clipspace = vec4(ndcspace * clipspace.w, clipspace.w);
Obviously you'll need a float for clipspace.w rather than the literal code, but you get my point. Once you have clipspace, to get camera space, you multiply by the inverse projection matrix:
vec4 cameraspace = InvProj * clipspace;
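Here is a small NumPy sketch of the whole reconstruction, assuming a row-major layout and a typical projection matrix like the one discussed above (the name eye_space_from_ndc is made up; T1 and T2 follow the derivation). It takes the full NDC coordinate; in the RTT pass, ndc.xy would come from the fragment coordinates and ndc.z from the remapped depth value.

import numpy as np

def eye_space_from_ndc(ndc, proj, inv_proj):
    """Recover the eye-space position from NDC coordinates: solve for
    cameraspace.z from ndc.z, use it as -clipspace.w, then multiply by
    the inverse projection matrix."""
    # With the typical projection matrix (row-major here), the third row
    # is [0, 0, T1, T2] and the fourth row is [0, 0, -1, 0].
    T1, T2 = proj[2, 2], proj[2, 3]
    camera_z = -T2 / (ndc[2] + T1)           # cameraspace.z = -T2 / (ndc.z + T1)
    clip_w = -camera_z                       # clipspace.w  = -cameraspace.z
    clip = np.append(ndc * clip_w, clip_w)   # clipspace = (ndcspace * w, w)
    return inv_proj @ clip                   # back to eye (camera) space

# Round-trip check with a made-up projection and eye-space point:
proj = np.array([
    [2.4142, 0.0,     0.0,     0.0],
    [0.0,    2.4142,  0.0,     0.0],
    [0.0,    0.0,    -1.02,   -0.202],
    [0.0,    0.0,    -1.0,     0.0],
])
eye = np.array([0.3, -0.2, -4.0, 1.0])
clip = proj @ eye
ndc = clip[:3] / clip[3]
print(eye_space_from_ndc(ndc, proj, np.linalg.inv(proj)))   # ~ [0.3, -0.2, -4.0, 1.0]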
I have a set of (X,Y,Z) points representing different planar features. I need to calculate the slope of each plane using normal vectors.
I think the slope is given by the angle between the normal vector (NV) of each plane and the NV of an imaginary horizontal plane. Assume the plane equation I use is ax + by + c = z; then I guess the normal vector of my plane is (a, b, -1). For my plane equation, what should the equation of the imaginary horizontal plane be? I think the equation of a horizontal plane is z = c, hence its normal vector is (0, 0, -1). Is this correct?
Then the angle between my plane and the horizontal plane is:
cos^{-1}[(a * 0 + b * 0 + (-1) * 1) / (√{a1^2 + b1^2 + c1^2} * √{0^2 + 0^2 + 1^2})]
Is that correct? Please comment and give me the correct equation.
Yes, that's mostly correct, but you've made some small mistakes substituting into the expression for the angle. The angle is cos^{-1}[(a * 0 + b * 0 + (-1) * (-1)) / (√{a^2 + b^2 + (-1)^2} * √{0^2 + 0^2 + (-1)^2})] = cos^{-1}(1 / √{a^2 + b^2 + 1})
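A tiny Python sketch of that formula, returning the slope in degrees (the helper name is just illustrative):

import math

def plane_slope_degrees(a, b):
    """Slope of the plane z = a*x + b*y + c relative to the horizontal,
    i.e. the angle between its normal (a, b, -1) and the vertical (0, 0, -1)."""
    return math.degrees(math.acos(1.0 / math.sqrt(a * a + b * b + 1.0)))

print(plane_slope_degrees(0.0, 0.0))  # flat plane -> 0 degrees
print(plane_slope_degrees(1.0, 0.0))  # z = x -> 45 degrees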