I'm writing a ray tracer for class, and I'm getting an odd issue I can't seem to nail down the source of: for some reason my texture is rotated 90° clockwise and then flipped horizontally. I'm using barycentric coordinates to navigate my UV space.
I've already tried playing with how I'm generating u, v, and w, but it results in the same issue.
[Image: the issue visible in-program]
[Image: my actual test texture]
//how I'm generating my barycentric coordinates:
Ph = Pe + Npe*Th; // Ph is the point being tested; u, v, w are generated while testing inside/outside the triangle (Pe = eye point, Npe = ray direction from the eye, Th = ray parameter t at the hit)
A = glm::cross(P1 - P0, P2 - P0);
//glm::vec3 A0 = glm::cross(Ph - P1, Ph - P2);
//glm::vec3 A1 = glm::cross(Ph - P2, Ph - P0);
//glm::vec3 A2 = glm::cross(Ph - P0, Ph - P1);
glm::vec3 A0 = glm::cross(P1-Ph, P2-Ph);
glm::vec3 A1 = glm::cross(P2-Ph, P0-Ph);
glm::vec3 A2 = glm::cross(P0-Ph, P1-Ph);
if (glm::dot(n0, glm::normalize(A0)) < 0 || glm::dot(n0, glm::normalize(A1)) < 0 || glm::dot(n0, glm::normalize(A2)) < 0)
{
//point is outside triangle
return -1;
}
// normalize and check the dot products to determine if they are facing the right way
u = glm::length(A0) / glm::length(A);
v = glm::length(A1) / glm::length(A);
w = 1 - u - v;
And then here is the portion that uses that to calculate the texture coordinates.
//portion of code calculating texture coordinates
//calculate new location of texture coordinate, assume z position is 0
glm::vec3 textureCo = P0TexCo*this->u + P1TexCo*this->v + P2TexCo*this->w;
u = textureCo[0];
v = textureCo[1];
Found the issue: it had to do with how OpenGL interprets coordinates and which corner it starts in when displaying pixels from an array. The solution is to flip the image vertically after you load it in.
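For anyone hitting the same thing, here is a minimal sketch of that kind of flip, assuming a tightly packed RGBA8 pixel buffer (the buffer layout and names are illustrative, not my actual loader):

#include <algorithm>
#include <cstdint>
#include <vector>

// Flip the image vertically so row 0 in memory becomes the bottom row,
// matching the corner OpenGL starts in when reading pixels from an array.
void FlipImageVertically(std::vector<std::uint8_t>& pixels, int width, int height)
{
    const int rowBytes = width * 4; // 4 bytes per RGBA8 pixel
    for (int y = 0; y < height / 2; ++y)
    {
        std::swap_ranges(pixels.begin() + y * rowBytes,
                         pixels.begin() + (y + 1) * rowBytes,
                         pixels.begin() + (height - 1 - y) * rowBytes);
    }
}

Equivalently, you can leave the image alone and sample with v = 1 - v.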
I have a plane defined by the standard plane equation a*x + b*y + c*z + d = 0, which I would like to be able to draw using OpenGL. How can I derive the four points needed to draw it as a quadrilateral in 3D space?
My plane type is defined as:
struct Plane {
float x,y,z; // plane normal
float d;
};
void DrawPlane(const Plane & p)
{
???
}
EDIT:
So, rethinking the question, what I actually wanted was to draw a discrete representation of a plane in 3D space, not an infinite plane.
Based on the answer provided by @a.lasram, I have produced this implementation, which does just that:
void DrawPlane(const Vector3 & center, const Vector3 & planeNormal, float planeScale, float normalVecScale, const fColorRGBA & planeColor, const fColorRGBA & normalVecColor)
{
    Vector3 tangent, bitangent;
    OrthogonalBasis(planeNormal, tangent, bitangent);

    const Vector3 v1(center - (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v2(center + (tangent * planeScale) - (bitangent * planeScale));
    const Vector3 v3(center + (tangent * planeScale) + (bitangent * planeScale));
    const Vector3 v4(center - (tangent * planeScale) + (bitangent * planeScale));

    // Draw wireframe plane quadrilateral:
    DrawLine(v1, v2, planeColor);
    DrawLine(v2, v3, planeColor);
    DrawLine(v3, v4, planeColor);
    DrawLine(v4, v1, planeColor);

    // And a line depicting the plane normal:
    const Vector3 pvn(
        (center[0] + planeNormal[0] * normalVecScale),
        (center[1] + planeNormal[1] * normalVecScale),
        (center[2] + planeNormal[2] * normalVecScale)
    );
    DrawLine(center, pvn, normalVecColor);
}
Where OrthogonalBasis() computes the tangent and bi-tangent from the plane normal.
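For reference, OrthogonalBasis() can be as simple as the following sketch (a minimal version for illustration; Cross() and Normalized() are assumed Vector3 helpers, and fabs comes from <cmath>):

void OrthogonalBasis(const Vector3 & normal, Vector3 & tangent, Vector3 & bitangent)
{
    // Pick a helper axis that cannot be (nearly) parallel to the normal.
    const Vector3 helper = (std::fabs(normal[0]) > 0.9f) ? Vector3(0.0f, 1.0f, 0.0f)
                                                         : Vector3(1.0f, 0.0f, 0.0f);
    tangent   = Normalized(Cross(normal, helper));
    bitangent = Normalized(Cross(normal, tangent));
}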
To see the plane as if it's infinite you can find 4 quad vertices so that the clipped quad and the clipped infinite plane form the same polygon. Example:
Sample 2 random points P1 and P2 on the plane such that P1 != P2.
Deduce a tangent t and bi-tangent b as
t = normalize(P2-P1); // get a normalized tangent
b = cross(t, n); // the bi-tangent is the cross product of the tangent and the normal
Compute the bounding sphere of the view frustum. The sphere would have a diameter D (if this step seems difficult, just set D to a large enough value such as the corresponding sphere encompasses the frustum).
Get the 4 quad vertices v1 , v2 , v3 and v4 (CCW or CW depending on the choice of P1 and P2):
v1 = P1 - t*D - b*D;
v2 = P1 + t*D - b*D;
v3 = P1 + t*D + b*D;
v4 = P1 - t*D + b*D;
One possibility (possibly not the cleanest) is to get the orthogonal vectors aligned to the plane and then choose points from there.
P1 = < x, y, z >
t1 = a random vector, non-zero and not co-linear with P1.
P2 = norm(P1 cross t1)
P3 = norm(P1 cross P2)
Now all points in the desired plane are defined as a starting point plus a linear combination of P2 and P3. This way you can get as many points as desired for your geometry.
Note: the starting point is just your plane normal < x, y, z > multiplied by the distance from the origin: abs(d).
Also of interest, with clever selection of t1, you can also get P2 aligned to some view. Say you are looking at the x, y plane from some z point. You might want to choose t1 = < 0, 1, 0 > (as long as it isn't co-linear to P1). This yields P2 with 0 for the y component, and P3 with 0 for the x component.
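Here is a quick sketch of this scheme (glm is used purely for illustration; the approach itself is library-agnostic):

#include <cmath>
#include <glm/glm.hpp>

// Returns the plane point start + a*P2 + b*P3 for the plane with the
// given normal and distance d, following the construction above.
glm::vec3 PointOnPlane(const glm::vec3 & normal, float d, float a, float b)
{
    const glm::vec3 P1 = glm::normalize(normal);
    glm::vec3 t1(0.0f, 1.0f, 0.0f);               // t1 must not be co-linear with P1
    if (std::fabs(glm::dot(P1, t1)) > 0.99f)
        t1 = glm::vec3(1.0f, 0.0f, 0.0f);         // fall back if nearly co-linear
    const glm::vec3 P2 = glm::normalize(glm::cross(P1, t1));
    const glm::vec3 P3 = glm::normalize(glm::cross(P1, P2));
    const glm::vec3 start = P1 * std::fabs(d);    // starting point, per the note above
    return start + a * P2 + b * P3;
}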
For a project I need to compute the real-world position and orientation of a camera with respect to a known object.
I have a set of photos, each displays a chessboard from different points of view.
Using CalibrateCamera and solvePnP I am able to reproject points in 2D, to get an AR effect.
So my situation is as such:
Intrinsic parameters are known
Distortion coefficients are known
The translation vector and rotation vector are known per photo.
I simply cannot figure out how to compute the position of the camera. My guess was:
Invert the translation vector (= t').
Transform the rotation vector to degrees (it seems to be in radians) and invert it.
Use Rodrigues on the rotation vector.
Compute RotationMatrix * t'.
But the results are somehow totally off...
Basically I want to compute a ray for each pixel in world coordinates.
If more information on my problem is needed, I'd be glad to answer quickly.
I don't get it... somehow the rays are still off. This is my code, btw:
Mat image1CamPos = tvecs[0].clone(); //From calibrateCamera
Mat rot = rvecs[0].clone(); //From calibrateCamera
Rodrigues(rot, rot);
rot = rot.t();
//Position of Camera
Mat pos = rot * image1CamPos;
//Ray-Normal (( (double)mk[i][k].x) are known image-points)
float x = (( (double)mk[i][0].x) / fx) - (cx / fx);
float y = (( (double)mk[i][0].y) / fy) - (cy / fy);
float z = 1;
float mag = sqrt(x*x + y*y + z*z);
x /= mag;
y /= mag;
z /= mag;
Mat unit(3, 1, CV_64F);
unit.at<double>(0, 0) = x;
unit.at<double>(1, 0) = y;
unit.at<double>(2, 0) = z;
//Rotation of the ray (stof1 is a rotation matrix defined elsewhere)
Mat rotatedRay = stof1 * unit;
But when plotting this, the rays are off :/
The translation t (3x1 vector) and rotation R (3x3 matrix) of an object with respect to the camera equals the coordinate transformation from object into camera space, which is given by:
v' = R * v + t
The inversion of the rotation matrix is simply the transposed:
R^-1 = R^T
Knowing this, you can easily resolve the transformation (first eq.) to v:
v = R^T * v' - R^T * t
This is the transformation from camera into object space, i.e., the position of the camera with respect to the object (rotation = R^T and translation = -R^T * t).
You can simply get a 4x4 homogeneous transformation matrix from this:
T = ( R^T   -R^T * t )
    ( 0          1   )
If you now have any point in camera coordinates, you can transform it into object coordinates:
p' = T * (x, y, z, 1)^T
So, if you'd like to project a ray from a pixel with coordinates (a,b) (probably you will need to define the center of the image, i.e. the principal point as reported by CalibrateCamera, as (0,0)) -- let that pixel be P = (a,b)^T. Its 3D coordinates in camera space are then P_3D = (a,b,0)^T. Let's project a ray 100 pixel in positive z-direction, i.e. to the point Q_3D = (a,b,100)^T. All you need to do is transform both 3D coordinates into the object coordinate system using the transformation matrix T and you should be able to draw a line between both points in object space. However, make sure that you don't confuse units: CalibrateCamera will report pixel values while your object coordinate system might be defined in, e.g., cm or mm.
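As a compact sketch of the position computation in OpenCV terms (rvec and tvec as returned by calibrateCamera or solvePnP; this is an illustration, not the asker's code):

#include <opencv2/opencv.hpp>

// Camera position in object space: rotation = R^T, translation = -R^T * t.
cv::Mat cameraPositionInObjectSpace(const cv::Mat & rvec, const cv::Mat & tvec)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);        // 3x1 rotation vector -> 3x3 rotation matrix
    cv::Mat pos = -R.t() * tvec;   // -R^T * t
    return pos;
}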
I'm trying to implement a line segment and plane intersection test that returns true or false depending on whether the segment intersects the plane. It should also return the contact point on the plane where the line intersects; if the segment does not intersect, the function should still return the intersection point the line would have had if it had been a ray. I used the information and code from Christer Ericson's Real-Time Collision Detection, but I don't think I'm implementing it correctly.
The plane I'm using is derived from the normal and a vertex of a triangle. Finding the location of intersection on the plane is what I want, regardless of whether or not it is located on the triangle I used to derive the plane.
The parameters of the function are as follows:
contact = the contact point on the plane; this is what I want calculated
ray = B - A, simply the line from A to B
rayOrigin = A, the origin of the line segment
normal = normal of the plane (normal of a triangle)
coord = a point on the plane (a vertex of the triangle)
Here's the code I'm using:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin, Vector normal, Vector coord) {
    // calculate plane
    float d = Dot(normal, coord);

    if (Dot(normal, ray)) {
        return false; // avoid divide by zero
    }

    // Compute the t value for the directed line ray intersecting the plane
    float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // scale the ray by t
    Vector newRay = ray * t;

    // calc contact point
    contact = rayOrigin + newRay;

    if (t >= 0.0f && t <= 1.0f) {
        return true; // line intersects plane
    }
    return false; // line does not
}
In my tests, it never returns true... any ideas?
I am answering this because it came up first on Google when searching for a C++ example of ray-plane intersection :)
The code always returns false because you enter the if here:
if (Dot(normal, ray)) {
return false; // avoid divide by zero
}
A dot product is only zero if the vectors are perpendicular, which is exactly the case you want to avoid (no intersection), and non-zero numbers are truthy in C.
Thus the solution is to negate the condition (!) or to test Dot(...) == 0.
In all other cases there will be an intersection.
On to the intersection computation :
All points X of a plane follow the equation
Dot(N, X) = d
Where N is the normal and d can be found by putting a known point of the plane in the equation.
float d = Dot(normal, coord);
On to the ray: all points s of a line can be expressed as a point p plus a scaled direction vector D:
s = p + x*D
So if we search for which x s is in the plane, we have
Dot(N, s) = d
Dot(N, p + x*D) = d
The dot product a·b is transpose(a)*b. Let transpose(N) be Nt.
Nt*(p + x*D) = d
Nt*p + Nt*D*x = d (x scalar)
x = (d - Nt*p) / (Nt*D)
x = (d - Dot(N, p)) / Dot(N, D)
Which gives us :
float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
We can now get the intersection point by putting x in the line equation
s = p + x*D
Vector intersection = rayOrigin + x*ray;
The above code, updated:
bool linePlaneIntersection(Vector& contact, Vector ray, Vector rayOrigin,
                           Vector normal, Vector coord) {
    // get d value
    float d = Dot(normal, coord);

    if (Dot(normal, ray) == 0) {
        return false; // No intersection, the line is parallel to the plane
    }

    // Compute the x value for the directed line ray intersecting the plane
    float x = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);

    // output contact point (contact is a reference, not a pointer;
    // x parameterizes the unnormalized ray, so don't normalize it here)
    contact = rayOrigin + ray * x;
    return true;
}
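Hypothetical usage, with made-up values (assuming a three-float Vector constructor), for a segment from A to B against a triangle's plane:

Vector contact;
Vector A(0.0f, 5.0f, 0.0f), B(0.0f, -5.0f, 0.0f);                // segment endpoints
Vector triNormal(0.0f, 1.0f, 0.0f), triVertex(0.0f, 0.0f, 0.0f); // plane from a triangle

if (linePlaneIntersection(contact, B - A, A, triNormal, triVertex)) {
    // contact is now (0, 0, 0), where the line through A and B crosses the plane
}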
Aside 1:
What does the d value mean ?
For two vectors a and b, the dot product returns the length of the orthogonal projection of one vector onto the other, times the length of that other vector.
But if a is normalized (length = 1), Dot(a, b) is simply the length of the projection of b onto a. In the case of our plane, d gives us the signed distance from the origin to the plane along the normal direction (a is the normal); for example, with normal (0, 0, 1) and plane point (2, 3, 5), d = 5. We can then tell whether a point lies on the plane by comparing the length of its projection onto the normal (a dot product) with d.
Aside 2:
How to check if a ray intersects a triangle ? (Used for raytracing)
In order to test if a ray hits a triangle given by 3 vertices, you first have to do what is shown here: get the intersection with the plane formed by the triangle.
The next step is to look if this point lies in the triangle. This can be achieved using the barycentric coordinates, which express a point in a plane as a combination of three points in it. See Barycentric Coordinates and converting from Cartesian coordinates
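For reference, here is a sketch of such a barycentric inside-test (Vector and Dot() as above; the point p is assumed to already lie in the triangle's plane):

bool PointInTriangle(Vector p, Vector a, Vector b, Vector c)
{
    // Express p - a in the basis (b - a, c - a) and solve for the weights.
    Vector v0 = b - a, v1 = c - a, v2 = p - a;
    float d00 = Dot(v0, v0), d01 = Dot(v0, v1), d11 = Dot(v1, v1);
    float d20 = Dot(v2, v0), d21 = Dot(v2, v1);
    float denom = d00 * d11 - d01 * d01;
    float v = (d11 * d20 - d01 * d21) / denom; // barycentric weight of b
    float w = (d00 * d21 - d01 * d20) / denom; // barycentric weight of c
    return v >= 0.0f && w >= 0.0f && (v + w) <= 1.0f;
}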
I could be wrong about this, but there are a few spots in the code that seem very suspicious. To begin, consider this line:
// calculate plane
float d = Dot(normal, coord);
Here, your value d corresponds to the dot product between the plane normal (a vector) and a point in space (a point on the plane). This seems wrong. In particular, if you have any plane passing through the origin and use the origin as the coordinate point, you will end up computing
d = Dot(normal, (0, 0, 0)) = 0
And immediately returning false. I'm not sure what you intended to do here, but I'm pretty sure that this isn't what you meant.
Another spot in the code that seems suspicious is this line:
// Compute the t value for the directed line ray intersecting the plane
float t = (d - Dot(normal, rayOrigin)) / Dot(normal, ray);
Note that you're computing the dot product between the plane's normal vector (a vector) and the ray's origin point (a point in space). This seems weird because it means that depending on where the ray originates in space, the scaling factor you use for the ray changes. I would suggest looking at this code one more time to see if this is really what you meant.
Hope this helps!
This all looks fine to me; I've independently checked the algebra.
As an example test case:
A = (0,0,1)
B = (0,0,-1)
coord = (0,0,0)
normal = (0,0,1)
This gives:
d = Dot( (0,0,1), (0,0,0)) = 0
Dot( (0,0,1), (0,0,-2)) = -2 // so trap for the line being in the plane passes.
t = (0 - Dot((0,0,1), (0,0,1))) / Dot((0,0,1), (0,0,-2)) = (0 - 1) / -2 = 1/2
contact = (0,0,1) + 1/2 (0,0,-2) = (0,0,0) // as expected.
So, given the emendation following @templatetypedef's answer, the only area where I can see a problem is in the implementation of one of the other operations, be it Dot() or the Vector operators.
This version worked for me in an OpenGL C# application.
bool GetLinePlaneIntersection(out vec3 contact, vec3 ray_origin, vec3 ray_end, vec3 normal, vec3 coord)
{
    contact = new vec3();
    vec3 ray = ray_end - ray_origin;

    float d = glm.dot(normal, coord);
    if (glm.dot(normal, ray) == 0)
    {
        return false;
    }

    float t = (d - glm.dot(normal, ray_origin)) / glm.dot(normal, ray);
    contact = ray_origin + ray * t;
    return true;
}
I'm trying to make a quad always appear in front of the camera. To start, I'm aligning it with the camera on the x-z plane and making sure it always faces the camera. I used this code...
float ry = cameraRY+PI_2;
float dis = 12;
float sz = 4;
float x = cameraX-dis*cosf(ry);
float y = cameraY;
float z = cameraZ-dis*sinf(ry)+cosf(ry)*sz;
float x2 = x + sinf(ry)*sz;
float y2 = y + sz;
float z2 = z - cosf(ry)*sz;
glVertex3f(x,y,z);
glVertex3f(x2,y,z2);
glVertex3f(x2,y2,z2);
glVertex3f(x,y2,z);
But it didn't quite look right; it seemed that the quad was rotating around an invisible point that was itself rotating correctly around the camera. I don't really know how to fix it or how else to go about doing this; any help appreciated!
Edit: Forgot to mention,
cameraX,cameraY,cameraZ are the camera's x,y,z positions
cameraRX and cameraRY are the camera's x and y rotations (Z rotation is always zero)
Check out this old tutorial on Lighthouse3D. It describes several "billboarding" techniques, which I believe are what you want.
Let P be your model view projection matrix, and c be the center of the quad you are trying to draw. You want to find a pair of vectors u, v that determine the edges of your quad,
Q = [ c-u-v, c-u+v, c+u+v, c+u-v ]
Such that u is pointing directly down in clip coordinates, while v is pointing to the right:
P(u) = (0, s, 0, 0)
P(v) = (s, 0, 0, 0)
Where s is the desired scale of your quad. Suppose that P is written in block form,
    [    M    | t ]
P = [---------+---]
    [ 0  0  1 | 0 ]
Then let m0, m1 be the first two rows of M. Now consider the equation we got for P(u); substituting and simplifying, we get:
P(u)  ~>  M u = (0, s, 0)^T
Which leads to the following solution for u, v:
u = s * m1 / |m1|^2
v = s * m0 / |m0|^2
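A sketch of those two formulas using glm (glm and its row accessor are assumptions here, not part of the original answer):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp>

// Compute the quad edge vectors u and v from the model view projection
// matrix P and the desired scale s, per u = s*m1/|m1|^2 and v = s*m0/|m0|^2.
void BillboardAxes(const glm::mat4 & P, float s, glm::vec3 & u, glm::vec3 & v)
{
    const glm::vec3 m0 = glm::vec3(glm::row(P, 0)); // first row of M, translation dropped
    const glm::vec3 m1 = glm::vec3(glm::row(P, 1)); // second row of M
    u = s * m1 / glm::dot(m1, m1);
    v = s * m0 / glm::dot(m0, m0);
}

The quad corners are then c-u-v, c-u+v, c+u+v, c+u-v as above.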
I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y, which gives a highest, lowest, and middle point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta each scanline. When rendering the top part, the deltas are x_delta[0] and x_delta[1]; when rendering the bottom part, they are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, with deltas applied each step.
This works fine except when I try to do perspective-correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows where the deltas are flipped. As you can see, the two triangles seem to be converging toward different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
    Vec3 projected = m_Rotation * (a_Point - m_Position);
    // #define FOV_ANGLE 60.f
    // static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
    // static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
    float zcamera = DEPTH / projected.z;
    return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The ZBuffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The ZBuffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this can not be the issue.
I once made a software rasterizer in fixed point (for a mobile phone), but I don't have the sources on my laptop, so let me check tonight how I did it. In essence, what you've got is not bad! A thing like this could be caused by a very small error.
A general tip for debugging this is to have a few test triangles (sloped left side, sloped right side, 90-degree angles, etc.) and step through the code with the debugger to see how your logic deals with each case.
EDIT:
Pseudocode of my rasterizer (only U, V, and Z are taken into account; if you also want to do Gouraud shading, you have to do for R, G, and B everything you are doing for U, V, and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
First order the coords (x, y, z, u, v) so that coord[0].y < coord[1].y < coord[2].y.
Next check if any 2 sets of coordinates are identical (only check x and y). If so, don't draw.
Exception: does the triangle have a flat top? If so, the first slope will be infinite.
Exception 2: does the triangle have a flat bottom? (Yes, triangles can have these too ;^) ) Then the last slope will be infinite too.
Calculate the 2 slopes (left side and right side):
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
The second part of the triangle is calculated depending on whether the left side of the triangle is really on the left side (or needs swapping).
code fragment:
if (leftDeltaX < rightDeltaX)
{
    leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    rightDeltaX2 = rightDeltaX
    leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
    leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
    leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
    swap(leftDeltaX, rightDeltaX);
    leftDeltaX2 = leftDeltaX;
    rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaU2 = leftDeltaU
    leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaV2 = leftDeltaV
    leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU to leftDeltaU, currentLeftV to leftDeltaV, and currentLeftZ to leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v, and z by the fractional part of y for subpixel accuracy (I guess this is also needed for floats).
For my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display.
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if (halfwayX - x[1] == 0) { slopeU = 0, slopeV = 0, slopeZ = 0 } else { slopeU = (halfwayU - u[1]) / (halfwayX - x[1]) } //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes everytime is too slow)
unsigned int *buf = destbuf + (y * pitch) + startX; (unsigned int pixels in case you are doing 24-bit or 32-bit rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilinear interpolation) you do the following:
code fragment:
float tv = startV / startZ;
float tu = startU / startZ;
tv %= texturePitch; //make sure the texture coordinates stay on the texture if they are too wide/high
tu %= texturePitch; //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf + tu + (tv*texturePitch); //in case of fixed point one could have shifted the tv. Now we have to multiply every time.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower then ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
Let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I read back in the day is available online: Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
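In other words, per pixel, something like this (the names here are illustrative):

// The scanline interpolators carry u/z, v/z, and 1/z, all of which
// interpolate linearly in screen space; recover the true u, v per pixel.
struct Interpolants { float u_over_z, v_over_z, one_over_z; };

void RecoverUV(const Interpolants & it, float & u, float & v)
{
    const float z = 1.0f / it.one_over_z; // true depth at this pixel
    u = it.u_over_z * z;                  // same as u_over_z / one_over_z
    v = it.v_over_z * z;
}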