I am implementing a Z-buffer to determine which pixels should be drawn in a simple scene filled with triangles. I have structural representations of a triangle, a vertex, a vector (the mathematical (x, y, z) kind, of course), as well as a function that draws an individual pixel to the screen. Here are the structures I have:
struct vertex{
    float x, y, z;
    ... //other members for lighting, etc. that will be used later and are not relevant here
};
struct myVector{
    float x, y, z;
};
struct triangle{
    ... //other stuff
    vertex v[3];
};
Unfortunately, as I scan convert my triangles to the screen, which relies on calculating depths to determine what is visible and gets to be drawn, I am getting incorrect/unrealistic Z values (e.g., the depth at a point in the triangle is out of bounds of the depths of all 3 of its vertices)! I have been looking through my code over and over and cannot figure out whether my math is off or I have a careless mistake somewhere, so I will try to present exactly what I am trying to do in the hopes that someone else can see something that I don't. (And I have checked carefully that floating point values remain floating point values, that I am passing arguments correctly, etc., so this is really baffling!)
Overall, my scan conversion algorithm fills pixels across a scan line like this (pseudocode):
for all triangles{
    ... //Do edge-related sorting stuff, etc...get ready to fill pixels
    float zInit; //the very first z-value, with a longer calculation
    float zPrev; //the "zk" needed when interpolating "zk+1" across a scan line
    for(xPos = currentX at left side edge; xPos != currentX at right side edge; xPos++){
        *if this is the first pixel across the scan line, calculate zInit and draw the pixel/store z if the depth is less than the current zBuffer value at this point. Then set zPrev = zInit.
        *otherwise, interpolate zNext using zPrev. Draw the pixel/store z if depth < current zBuffer value at this point. Then set zPrev = zNext.
    }
    ... //other scan conversion stuff...update x values, etc.
}
To get the value of zInit for each scan line, I consider the plane equation Ax + By + Cz + D = 0 and rearrange it to get z = -1*(Ax + By + D)/C, where x and y are plugged in as the current x value across a scan line and the current scan line value itself, respectively.
For subsequent z values across a scan line, I interpolate as zk+1 = zk - A/C, where A and C come from the plane equation (stepping x by 1 changes z by exactly -A/C, by the rearranged equation above).
To get the A, B and C for these z calculations, I need the normal vector of the plane defined by the 3 vertices (the array vertex v[3]) of the current triangle. To get this normal (which I named planeNormal in the code), I defined a cross product function:
myVector cross(float x1, float y1, float z1, float x2, float y2, float z2)
{
    // standard cross product (x1, y1, z1) x (x2, y2, z2)
    float crX = (y1*z2) - (z1*y2);
    float crY = (z1*x2) - (x1*z2);
    float crZ = (x1*y2) - (y1*x2);
    myVector res;
    res.x = crX;
    res.y = crY;
    res.z = crZ;
    return res;
}
To get the D value for the plane equation/my z calculations, I use the plane equation A(x-x1) + B(y-y1) + C(z-z1) = 0, where (x1, y1, z1) is just a reference point in the plane. I just chose the triangle vertex v[0] for the reference point and rearranged:
Ax + By + Cz = Ax1 + By1 + Cz1
Thus, D = Ax1 + By1 + Cz1
So, finally, to get the A, B, C, and D for the z calculations, I did this for each triangle, where trianglelist[nt] is the triangle at current index nt in the overall triangle array for the scene:
float pA = planeNormal.x;
float pB = planeNormal.y;
float pC = planeNormal.z;
float pD = (pA*trianglelist[nt].v[0].x)+(pB*trianglelist[nt].v[0].y)+(pC*trianglelist[nt].v[0].z);
From here, within the scan conversion algorithm I described, I calculated the zs:
zInit = -1*((pA*cx)+(pB*scanLine)+(pD))/(pC); //cx is current x value; scanLine is current y value
...
...
float zNext = zPrev - (pA/pC);
Alas, after all that careful work, something is off! In some triangles, the depth values come out realistic (except for the sign). With the triangle given by the vertices (200, 10, 75), (75, 200, 75) and (15, 60, 75), all depths come out as -75, and the same happened for other triangles with all vertices at the same depth. But with the vertices (390, 300, 105), (170, 360, 80), (190, 240, 25), all of the z values are over 300! The very first one comes out as 310.5, and the rest just get bigger, with a max around 365. This should not happen when the deepest vertex is at z = 105! So, after all of the rambling: can anyone see what might have caused this? I wouldn't be surprised if it's a sign-related thing, but where (after all, the absolute values are right in the constant-depth cases)?
The correct equations are:
n = cross (v[2] - v[0], v[1] - v[0]);
D = - dot (n, v[0]);
Note the minus sign: with the plane written as Ax + By + Cz + D = 0, the rearrangement gives D = -(Ax1 + By1 + Cz1), not +(Ax1 + By1 + Cz1).
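Applied to the structures in the question, the corrected setup might look like this sketch (here tri stands in for trianglelist[nt]):
// Edge vectors from the reference vertex v[0]
float e1x = tri.v[2].x - tri.v[0].x;
float e1y = tri.v[2].y - tri.v[0].y;
float e1z = tri.v[2].z - tri.v[0].z;
float e2x = tri.v[1].x - tri.v[0].x;
float e2y = tri.v[1].y - tri.v[0].y;
float e2z = tri.v[1].z - tri.v[0].z;
myVector planeNormal = cross(e1x, e1y, e1z, e2x, e2y, e2z);

float pA = planeNormal.x;
float pB = planeNormal.y;
float pC = planeNormal.z;
// The fix: D carries a minus sign, D = -dot(n, v[0])
float pD = -((pA * tri.v[0].x) + (pB * tri.v[0].y) + (pC * tri.v[0].z));
With that D, zInit = -1*((pA*cx)+(pB*scanLine)+(pD))/(pC) should give depths inside the range of the vertex depths.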
You should have a look at www.scratchapixel.com, particularly this lesson:
http://scratchapixel.com/lessons/3d-advanced-lessons/perspective-and-orthographic-projection-matrix/
It contains a self-contained program that shows you how to project vertices.
Given an integer 3D coordinate system, a center point P, a vector in some direction V, and a max sphere radius R:
I want to iterate over only integer points in a fashion that starts at P and goes along direction V until reaching the max radius R.
Then, for some small angle T iterate all points within the cone (or spherical sector) around V.
Incrementally expand T until T is pi/2 radians and every point within the sphere has been iterated.
I need to do this with O(1) space complexity. So the order of the points can't be precomputed/sorted but must result naturally from some math.
Example:
// Vector3 represents coordinates x, y, z
// where (typically) x is left/right, y is up/down, z is depth
Vector3 center = Vector3(0, 0, 0); // could be anything
Vector3 direction = Vector3(0, 100, 0); // could be anything
int radius = 4;
double piHalf = acos(0.0); // half of pi
std::queue<Vector3> list;
for (double angle = 0; angle < piHalf; angle += .1)
{
    int x = // confusion begins here
    int y = // ..
    int z = // ..
    list.push(Vector3(x, y, z));
}
See picture for this example
The first coordinates that should be caught are:
A(0,0,0), C(0,1,0), D(0,2,0), E(0,3,0), B(0,4,0)
Then, expanding the angle somewhat (orange cone):
K(-1,0,3), X(1,0,3), (0,1,3), (0,-1,3)
Expanding the angle a bit more (green cone):
F(1,1,3), (-1,-1,3), (1,-1,3), (-1,1,3)
My guess for what would be next is:
L(1,0,2), (-1,0,2), (0,1,2), (0,-1,2)
M(2,0,3) would be hit somewhat after
Extra notes and observations:
A cone will hit a max of four points at its base, if the vector is perpendicular to an axis and originates at an integer point. It may also hit points along the cone wall, depending on the angle.
I am trying to do this in C++.
I am aware of how to check whether a point X is within any given cone or spherical sector by comparing the angle between V and PX with T, and am currently using this knowledge for a lesser solution.
This is not a homework question, I am working on a 3D video game~
Iterate all integer positions Q in your sphere:
Simple 3x nested for loops through x, y, z in the range <P-R, P+R> will do. Just check that each point is inside the sphere:
u=(x,y,z)-P;
dot(u,u) <= R*R
Test if point Q is exactly on V:
Simply check the angle between PQ and V with a dot product:
u = Q-P
u = u/|u|
v = V/|V|
if (dot(u,v)==1) point Q is on V
Test if point Q is exactly on the surface of the "cone":
Simply check the angle between PQ and V with a dot product:
u = Q-P
u = u/|u|
v = V/|V|
if (dot(u,v)==cos(T/2)) point Q is on "cone"
Here I assume T is the full "cone" angle, not the half angle.
Beware: you need to use floats/doubles for this and make the comparisons with some margin for error, like:
if (fabs(dot(u,v)-1.0 )<1e-6) point Q is on V
if (fabs(dot(u,v)-cos(T/2))<1e-6) point Q is on "cone"
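Putting those three tests together, a minimal C++ sketch might look like this (Vec3 and the small helpers are assumptions, not the question's types; P, V, R and T are the quantities defined above):
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
Vec3 normalize(const Vec3 &a) { double l = std::sqrt(dot(a, a)); return { a.x/l, a.y/l, a.z/l }; }

// Scan every integer point Q in the sphere of radius R around P and
// classify it against the axis V of a cone with full angle T.
void scanSphere(const Vec3 &P, const Vec3 &V, int R, double T)
{
    const double eps = 1e-6;        // margin for the float comparisons
    const Vec3 v = normalize(V);    // unit axis direction
    for (int x = (int)P.x - R; x <= (int)P.x + R; ++x)
    for (int y = (int)P.y - R; y <= (int)P.y + R; ++y)
    for (int z = (int)P.z - R; z <= (int)P.z + R; ++z)
    {
        Vec3 Q = { (double)x, (double)y, (double)z };
        Vec3 u = sub(Q, P);
        if (dot(u, u) > (double)R * R) continue; // outside the sphere
        if (dot(u, u) == 0.0) continue;          // Q == P, direction undefined
        u = normalize(u);
        double c = dot(u, v);
        if (std::fabs(c - 1.0) < eps)
            std::printf("(%d,%d,%d) is on V\n", x, y, z);
        else if (std::fabs(c - std::cos(T / 2)) < eps)
            std::printf("(%d,%d,%d) is on the cone surface\n", x, y, z);
    }
}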
I am trying to produce random equilateral triangles on the console screen.
The method I am using: create a center point for the triangle (randomly positioned), move the center point to the origin (0,0), and create 3 points from the center (adding the radius (a random number) of the triangle to the Y axis of each point). Then I rotate 2 of the points, one by 120 degrees and the other by 240, making an equilateral triangle, and draw lines between the points. Then I bring the points back to the original plot relative to the centroid.
This works most of the time and I get an equilateral triangle; however, other times I don't quite get an equilateral triangle, and I am at a complete loss as to why.
I am using Bresenham's line algorithm to draw the lines between points.
Image of working triangle: http://imgur.com/GpF406O
Image of broken triangle: http://imgur.com/Oa2BYun
Here is the code that plots the coords for the triangle:
void Triangle::createVertex(Vertex cent)
{
    // angle of 120 in radians
    double s120 = sin(2.0943951024);
    double c120 = cos(2.0943951024);
    // angle of 240 in radians
    double s240 = sin(4.1887902048);
    double c240 = cos(4.1887902048);
    // bringing centroid to the origin and saving old pos to move later on
    int x = cent.getX();
    int y = cent.getY();
    cent.setX(0);
    cent.setY(0);
    // creating the points all equal distance from the centroid
    Vertex v1(cent.getX(), cent.getY() + radius);
    Vertex v2(cent.getX(), cent.getY() + radius);
    Vertex v3(cent.getX(), cent.getY() + radius);
    // rotate points
    double newx = v1.getX() * c120 - v1.getY() * s120;
    double newy = v1.getY() * c120 + v1.getX() * s120;
    double xnew = v2.getX() * c240 - v2.getY() * s240;
    double ynew = v2.getY() * c240 + v2.getX() * s240;
    // giving the points the actual location in relation to the old pos of the centroid
    v1.setX(newx + x);
    v1.setY(newy + y);
    v2.setX(xnew + x);
    v2.setY(ynew + y);
    v3.setX(x);
    v3.setY(y + radius);
    // adding them to a list (list is used in a function to draw the lines)
    vertices.push_back(v1);
    vertices.push_back(v2);
    vertices.push_back(v3);
}
Looking at the images of your two triangles (and looking at the line drawing algorithm) you are drawing lines as a series of discrete pixels. That means a vertex must fall in a pixel (it can't be on a boundary) like in this image.
So what happens if your vertex falls on* a border between pixels? Your line drawing algorithm has to make a decision on which pixel to put the vertex in.
Looking at the algorithm description on Wikipedia and the C++ implementation on a page at www.cs.helsinki.fi,
I see that both listed implementations use integer arithmetic**, which in this case is not unreasonable given you have discrete rows of pixels. This means that if your floating point calculations put one vertex above the threshold of the integer label for the next row of pixels when the floor (conversion from float to int) is done, but the other vertex is below that threshold, then the two vertices will be placed on different rows.
Think v1.y = 5.00000000000000000001 and v2.y = 4.99999999999999999999, which leads to v1 being placed on row 5 and v2 being placed on row 4.
This explains why you only see the issue occurring occasionally: you only occasionally have your vertices land on a boundary like this.
In order to fix this, a couple of things come to mind:
Fix it when you assign values to your vertices, the y values are the same anyways.
given:
v1.getX() = v2.getX() = 0 (defined by your code)
v1.getY() = v2.getY() = radius (defined by your code)
cos(120 degrees) = cos(240 degrees) ('tis true)
This reduces your two y values to
double newy = v1.getY() * c120
double ynew = v1.getY() * c120
ergo:
v1.setY(newy + y);
v2.setY(newy + y);
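Putting it together, the relevant part of createVertex might become (a sketch of the suggested change, everything else unchanged):
// rotate points; compute the rotated y once so v1 and v2 can never
// round to different rows
double newx = v1.getX() * c120 - v1.getY() * s120;
double newy = v1.getY() * c120 + v1.getX() * s120; // equals v2's rotated y
double xnew = v2.getX() * c240 - v2.getY() * s240;
v1.setX(newx + x);
v1.setY(newy + y);
v2.setX(xnew + x);
v2.setY(newy + y); // same y as v1 by construction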
If you wrote your own Bresenham's algorithm implementation you could add a check in that code to make sure your vertices are at the same height, but that seems like a really bad place to put that kind of check, since the height of the endpoints is specific to your problem and not to drawing lines in general.
*Or not exactly on, but close enough you can't tell the difference after accounting for floating point error
**The algorithm is not restricted to integer arithmetic, but I suspect that given the irregularity of your problem, the way the algorithm has been presented, and the fact that you are using discrete characters for the lines in your images, the integer arithmetic is the issue.
I want to fit a plane to a 3D point cloud. I use a RANSAC approach, where I sample several points from the point cloud, calculate the plane, and store the plane with the smallest error. The error is the distance between the points and the plane. I want to do this in C++, using Eigen.
So far, I sample points from the point cloud and center the data. Now, I need to fit the plane to the sampled points. I know I need to solve Mx = 0, but how do I do this? So far I have M (my samples); I want to know x (the plane), and this fit needs to be as close to 0 as possible.
I have no idea where to continue from here. All I have are my sampled points and I need more data.
From your question I assume that you are familiar with the RANSAC algorithm, so I will spare you the lengthy talk.
In a first step, you sample three random points. You can use the Random class for that but picking them not truly random usually gives better results. To those points, you can simply fit a plane using Hyperplane::Through.
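For instance (p0, p1 and p2 being the three sampled points):
Eigen::Hyperplane<float, 3> plane = Eigen::Hyperplane<float, 3>::Through(p0, p1, p2);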
In the second step, you repetitively cross out some points with large Hyperplane::absDistance and perform a least-squares fit on the remaining ones. It may look like this:
Vector3f mu = mean(points);
Matrix3f covar = covariance(points, mu);
// the normal is the eigenvector of the covariance matrix with the
// smallest eigenvalue, i.e. the last column of U in its SVD
JacobiSVD<Matrix3f> svd(covar, ComputeFullU);
Vector3f normal = svd.matrixU().col(2);
Hyperplane<float, 3> result(normal, mu);
Unfortunately, the functions mean and covariance are not built-in, but they are rather straightforward to code.
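For example, under the assumption that the points are stored in a std::vector<Vector3f>, the two helpers could be sketched as:
#include <Eigen/Dense>
#include <vector>

using Eigen::Matrix3f;
using Eigen::Vector3f;

Vector3f mean(const std::vector<Vector3f> &points)
{
    Vector3f sum = Vector3f::Zero();
    for (const Vector3f &p : points)
        sum += p;
    return sum / (float)points.size();
}

Matrix3f covariance(const std::vector<Vector3f> &points, const Vector3f &mu)
{
    Matrix3f covar = Matrix3f::Zero();
    for (const Vector3f &p : points)
    {
        Vector3f d = p - mu;
        covar += d * d.transpose(); // outer product of the centered point
    }
    return covar / (float)points.size();
}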
Recall that the equation for a plane passing through origin is Ax + By + Cz = 0, where (x, y, z) can be any point on the plane and (A, B, C) is the normal vector perpendicular to this plane.
The equation for a general plane (that may or may not pass through origin) is Ax + By + Cz + D = 0, where the additional coefficient D represents how far the plane is away from the origin, along the direction of the normal vector of the plane. [Note that in this equation (A, B, C) forms a unit normal vector.]
Now, we can apply a trick here and fit the plane using only the provided point coordinates. Divide both sides by D and move the constant term to the right-hand side. This leads to (A/D)x + (B/D)y + (C/D)z = -1. [Note that in this equation (A/D, B/D, C/D) forms a normal vector with length 1/D.]
We can set up a system of linear equations accordingly, and then solve it by an Eigen solver as follows.
// Example for 5 points
Eigen::Matrix<double, 5, 3> matA; // row: 5 points; column: xyz coordinates
// ... fill each row of matA with the coordinates of one sampled point ...
Eigen::Matrix<double, 5, 1> matB = -1 * Eigen::Matrix<double, 5, 1>::Ones();
// Find the plane normal
Eigen::Vector3d normal = matA.colPivHouseholderQr().solve(matB);
// Check if the fitting is healthy
double D = 1 / normal.norm();
normal.normalize(); // normal is a unit vector from now on
bool planeValid = true;
for (int i = 0; i < 5; ++i) { // compare Ax + By + Cz + D with 0.2 (ideally Ax + By + Cz + D = 0)
    if (fabs(normal(0)*matA(i, 0) + normal(1)*matA(i, 1) + normal(2)*matA(i, 2) + D) > 0.2) {
        planeValid = false; // 0.2 is an experimental threshold; can be tuned
        break;
    }
}
This method is equivalent to the typical SVD-based method, but much faster. It is suitable when the points are known to be roughly in a plane shape. However, the SVD-based method is more numerically stable (when the plane is very far away from the origin) and more robust to outliers.
I have object A, with a speed. The speed is specified as a 3D vector a = (x, y, z). The position is a 3D point A [X, Y, Z]. I need to find out if the current speed leads this object to another object B at position B [X, Y, Z].
I've successfully implemented this in 2 dimensions, ignoring the third one:
/*A is projectile, B is static object*/
//entity is object A
// - .v[3] is the speed vector
//position[3] is array of coordinates of object B
double vector[3]; //This is the vector c = A-B
this->entityVector(-1, entity.id, vector); //Fills the correct data
double distance = vector_size(vector); //This is distance |AB|
double speed = vector_size(entity.v); //This is size of speed vector a
float dist_angle = (float)atan2(vector[2],vector[0])*(180.0/M_PI); //Get angle of vector c as seen from Y axis - using X, Z
float speed_angle = (float)atan2((double)entity.v[2],entity.v[0])*(180.0/M_PI); //Get angle of vector a seen from Y axis - using X, Z
dist_angle = deg180to360(dist_angle); //Converts value to 0-360
speed_angle = deg180to360(speed_angle); //Converts value to 0-360
int diff = abs((int)compare_degrees(dist_angle, speed_angle)); //Returns the difference of vectors direction
I need to create the very same comparison to make it work in 3D - right now, the Y positions and Y vector coordinates are ignored.
What calculation should I do to get the second angle?
Edit based on answer:
I am using spherical coordinates and comparing their angles to check whether two vectors are pointing in the same direction. With one vector being A-B and the other being A's speed, I'm checking if A is heading to B.
I'm assuming the "second angle" you're looking for is φ. That is to say, you're using spherical coordinates:
(x,y,z) => (r,θ,φ)
r = sqrt(x^2 + y^2 + z^2)
θ = cos^-1(z/r)
φ = tan^-1(y/x)
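A direct C++ transcription of those formulas might look like this sketch (atan2 is used for φ so the quadrant comes out right; θ is undefined when r == 0):
#include <cmath>

void toSpherical(double x, double y, double z,
                 double &r, double &theta, double &phi)
{
    r = std::sqrt(x*x + y*y + z*z);
    theta = std::acos(z / r);
    phi = std::atan2(y, x);
}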
However, if all you want to do is find if A is moving with velocity a towards B, you can use a dot product for a basic answer.
1st vector: B - A (vector pointing from A to B)
2nd vector: a (velocity)
dot product: a * (B-A)
If the dot product is 0, it means that you're not getting any closer - you're moving around a sphere of constant radius ||B-A|| with B at the center. If the dot product > 0, you're moving towards the point, and if the dot product < 0, you're moving away from it.
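For instance, a minimal sketch of that test with the question's data layout (entity.v[3] is A's velocity; toB[3] is assumed to hold B - A, so negate the question's vector[], which holds A - B):
double d = entity.v[0]*toB[0] + entity.v[1]*toB[1] + entity.v[2]*toB[2];
if (d > 0)      { /* moving towards B */ }
else if (d < 0) { /* moving away from B */ }
else            { /* tangential: the distance to B is momentarily constant */ }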
I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This returns a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, with deltas applied each step.
This works fine except when I try to do perspective-correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part tells you where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points in the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
    Vec3 projected = m_Rotation * (a_Point - m_Position);
    // #define FOV_ANGLE 60.f
    // static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
    // static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
    float zcamera = DEPTH / projected.z;
    return zcamera;
}
Am I right, is it a z buffer issue?
The Z-buffer has nothing to do with it.
The Z-buffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The Z-buffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this cannot be the issue.
I've made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence what you've got is not bad! A thing like this could be caused by a very small error.
A general tip for debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc.) and step through the code with the debugger to see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading you have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
First order the coords (x, y, z, u, v) so that coord[0].y < coord[1].y < coord[2].y.
Next check if any 2 sets of coordinates are identical (only check x and y). If so, don't draw.
Exception: does the triangle have a flat top? If so, the first slope will be infinite.
Exception 2: does the triangle have a flat bottom (yes, triangles can have these too ;^) )? Then the last slope too will be infinite.
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
The second part of the triangle is calculated depending on whether the left side of the triangle is really on the left side (or needs swapping).
code fragment:
if (leftDeltaX < rightDeltaX)
{
    leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    rightDeltaX2 = rightDeltaX
    leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
    leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
    leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
    swap(leftDeltaX, rightDeltaX);
    leftDeltaX2 = leftDeltaX;
    rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaU2 = leftDeltaU
    leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaV2 = leftDeltaV
    leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both to x[0]
set currentLeftU to u[0], currentLeftV to v[0] and currentLeftZ to z[0]
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats;
for my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line:
startX = ceil(leftCurX); endX = ceil(rightCurX);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes everytime is too slow)
unsigned int *buf = destbuf + (y*pitch) + startX; (unsigned int* in case you are doing 24-bit or 32-bit rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilinear interpolation) you do the following:
code fragment:
float tv = startV / startZ;
float tu = startU / startZ;
tv = fmod(tv, texturePitch); //make sure the texture coordinates stay on the texture if they are too wide/high
tu = fmod(tu, texturePitch); //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf + (int)tu + ((int)tv * texturePitch); //in case of fixed point one could have shifted tv. Now we have to multiply every time.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower than ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I have read is available online: Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
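Concretely, the corrected per-pixel recovery might look like this sketch (u, v, z being the vertex attributes, with the rename applied):
// Per vertex: store u/z, v/z and 1/z; these interpolate linearly in screen space.
float u_over_z = u / z;
float v_over_z = v / z;
float one_over_z = 1.0f / z;

// Per pixel, after stepping the three interpolators across the scanline:
float z_here = 1.0f / one_over_z;
float u_here = u_over_z * z_here; // i.e. (u/z) / (1/z)
float v_here = v_over_z * z_here;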