OpenGL: select sphere with mouse

I have a number of spheres in 3d space which the user should be able to select with a mouse click. Now I've seen some examples around using gluUnProject so I gave it a shot. So I have (please correct me every step of the way if I'm wrong because I'm not 100% sure of any part of it):
import math
import numpy
from OpenGL.GL import *
from OpenGL.GLU import *

def compute_pos(x, y, z):
    '''
    Compute the 3d OpenGL coordinates for the given window coordinates.
    #param x, y: coordinates from the canvas, taken from the mouse position
    #param z: coordinate for the z-axis
    #return: (gl_x, gl_y, gl_z) tuple corresponding to coordinates in the OpenGL context
    '''
    modelview = numpy.matrix(glGetDoublev(GL_MODELVIEW_MATRIX))
    projection = numpy.matrix(glGetDoublev(GL_PROJECTION_MATRIX))
    viewport = glGetIntegerv(GL_VIEWPORT)
    winX = float(x)
    winY = float(viewport[3] - float(y))
    winZ = z
    return gluUnProject(winX, winY, winZ, modelview, projection, viewport)
Then, having the x and y of a mouse click and the position of the center of the sphere:
def is_picking(x, y, point):
    ray_start = compute_pos(x, y, -1)
    ray_end = compute_pos(x, y, 1)
    d = _compute_2d_distance((ray_start[0], ray_start[1]),
                             (ray_end[0], ray_end[1]),
                             (point[0], point[1]))
    if d > CUBE_SIZE:
        return False
    d = _compute_2d_distance((ray_start[0], ray_start[2]),
                             (ray_end[0], ray_end[2]),
                             (point[0], point[2]))
    if d > CUBE_SIZE:
        return False
    d = _compute_2d_distance((ray_start[1], ray_start[2]),
                             (ray_end[1], ray_end[2]),
                             (point[1], point[2]))
    if d > CUBE_SIZE:
        return False
    return True
So, because my 3D geometry is not good at all, I compute two points as the ray start and end point, then go into 2D three times, eliminating one dimension at a time, and compute the distance there between my line and the center of the sphere. If any of those distances is bigger than my sphere radius then it's not clicked. I think the formula for the distance is correct, but just in case:
def _compute_2d_distance(p1, p2, target):
    '''
    Compute the distance between the line defined by two points and a target point.
    #param p1: first point that defines the line
    #param p2: second point that defines the line
    #param target: the point to which the distance needs to be computed
    #return: distance from the point to the line
    '''
    if p2[0] != p1[0]:
        if p2[1] == p1[1]:
            # horizontal line: distance is measured along y
            return abs(target[1] - p1[1])
        # line written as a*x + b*y + c = 0, with slope a and b = -1
        a = (p2[1] - p1[1]) / (p2[0] - p1[0])
        b = -1
        c = p1[1] - p1[0] * a
        return abs(a * target[0] + b * target[1] + c) / math.sqrt(a * a + b * b)
    # vertical line: distance is measured along x
    return abs(target[0] - p1[0])
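For reference, the same 2D point-to-line distance can also be written without any special cases using the cross-product form; a small numpy sketch (not part of the original code, point_line_distance_2d is just an illustrative name):

import numpy as np

def point_line_distance_2d(p1, p2, target):
    # distance from target to the infinite line through p1 and p2:
    # |cross(p2 - p1, target - p1)| / |p2 - p1|
    p1, p2, target = (np.asarray(p, dtype=float) for p in (p1, p2, target))
    return abs(np.cross(p2 - p1, target - p1)) / np.linalg.norm(p2 - p1)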
Now the code seems to work fine in the start position, but after you use the mouse and rotate the scene even a little bit, nothing works as expected anymore.

Hi, there are a lot of solutions for this kind of problem.
Ray casting is one of the best, but it involves a lot of geometry knowledge and is not easy at all.
Moreover, gluUnProject is not available in other OpenGL implementations such as OpenGL ES for mobile devices (though you can reimplement it with your own matrix manipulation functions).
I personally prefer the color-picking solution, which is quite flexible and very fast computation-wise.
The idea is to render the selectable objects (only the selectable ones, for a performance boost), each with a given unique color, into an offscreen buffer.
Then you read the color of the pixel at the coordinates clicked by the user and select the corresponding 3D object.
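For illustration, a minimal PyOpenGL sketch of that idea, drawn into the back buffer rather than a dedicated offscreen buffer (the spheres list and the draw_sphere_geometry helper are hypothetical, and the exact return type of glReadPixels can vary between PyOpenGL configurations):

import numpy as np
from OpenGL.GL import *

def pick_sphere(x, y, viewport_height, spheres):
    '''Render each sphere with a unique flat color, read back the pixel
    under the mouse and return the index of the picked sphere (or None).'''
    glDisable(GL_LIGHTING)   # flat colors only, shading would change them
    glDisable(GL_DITHER)     # dithering would corrupt the encoded ids
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    for i, sphere in enumerate(spheres):
        ident = i + 1        # 0 is reserved for the background
        glColor3ub(ident & 0xFF, (ident >> 8) & 0xFF, (ident >> 16) & 0xFF)
        draw_sphere_geometry(sphere)   # hypothetical draw call
    # OpenGL's window origin is bottom-left, the canvas origin is top-left
    pixel = glReadPixels(x, viewport_height - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE)
    r, g, b = np.frombuffer(pixel, dtype=np.uint8)[:3]
    ident = int(r) | (int(g) << 8) | (int(b) << 16)
    glEnable(GL_DITHER)
    glEnable(GL_LIGHTING)
    return ident - 1 if ident else None

Don't swap buffers after this id pass, so the flat colors never become visible; just clear and draw the normal frame afterwards.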
Cheers
Maurizio Benedetti

Related

Trying to deform any mesh into a sphere. How to translate the vertex position to lie on a sphere?

I am trying to write a deformer script for Maya, using the Maya API, which deforms any mesh into a sphere by translating its vertices.
What I already have is a deformer which translates every vertex of the mesh in the direction of its normal by the amount specified. This is done using the equation below.
point += normals[itGeo.index()] * bulgeAmount * w * env;
Where point is the vertex on the mesh, normals[itGeo.index()] is a vector array holding the normal of each vertex, and w and env control the weight of the deformation and the envelope.
What this code basically does is translate the vertex in the direction of its normal by the amount specified. While this works for a sphere, because a sphere's vertex normals lie along the line through its center, it would not work for other meshes, whose normals do not point at the center of the mesh.
float bulgeAmount = data.inputValue(aBulgeAmount).asFloat();
float env = data.inputValue(envelope).asFloat();
MPoint point;
float w;
for (; !itGeo.isDone(); itGeo.next())
{
    w = weightValue(data, geomIndex, itGeo.index());
    point = itGeo.position();
    point += normals[itGeo.index()] * bulgeAmount * w * env;
    itGeo.setPosition(point);
}
I initially thought changing the direction of translation would solve the problem: if we can find the vector pointing from the center of the mesh to each vertex and translate the vertex along that direction by the amount specified, it should work. Like so:
point += (Center - point) * bulgeAmount * w * env;
Where Center is the center of the mesh. But this does not give the desired result. I also want the deformer to be set up in such a way that the user can input a radius "r" value and can change the amount attribute from 0 to 1 to deform the mesh from its original state to a spherical one, so that he can choose a value in between if he desires and get something between a sphere and the original shape.
This is my very first post in stackOverflow. I apologize if the format does not follow the community expectations. Any help on this will be greatly appreciated.
Thank You.
About the direction:
I think your line :
point += (Center - point) * bulgeAmount * w * env;
is a good starting point.
But instead of using (Center - point), you should use its opposite, (point - Center), and normalize it before using it. If you don't use a normalized version of this (point - Center) vector, every vertex will be translated to a wrong position.
About your variation between 0.0 (original) to 1.0 (sphere):
If Po is the original position
If Pf is the final position
If d is the original distance between the point Po and the Center C:
d=norm(Center - point) = norm(C-Po)
If Direction is (point - Center)/d = (Po - C)/d (so normalized, as explained above)
What we want:
At r=0.0 your vertex must stay at its original position: Pf = Center + Direction * d
At r=1.0 your vertex must stick to the sphere of radius R: Pf = Center + Direction * R
And if we generalize:
Pf = C + Direction * ( r*R + (1-r)*d )
With d = norm(C-Po)
Direction = (Po - C)/d
R the radius of your sphere
and r a user param between [0.0; 1.0]
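A small numpy sketch of that blend, outside the Maya API (the names spherify, center, radius and amount are illustrative, not from the original deformer):

import numpy as np

def spherify(points, center, radius, amount):
    '''Blend each point between its original position (amount = 0)
    and the surface of a sphere of the given radius (amount = 1).'''
    points = np.asarray(points, dtype=float)
    offsets = points - center                         # (point - Center)
    d = np.linalg.norm(offsets, axis=1)               # original distances
    directions = offsets / d[:, None]                 # normalized directions
    target = amount * radius + (1.0 - amount) * d     # r*R + (1-r)*d
    return center + directions * target[:, None]

Inside the deformer loop this corresponds to point = Center + Direction * (r*R + (1-r)*d) per vertex, optionally scaled further by w and env.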
Not sure I am clear enough, I'm not used to answering here either :)
Best

OpenGL sutherland-hodgman polygon clipping algorithm in homogeneous coordinates (4D, CCS)

I have two questions. (I marked 1, 2 below)
In OpenGL, clipping is done with the Sutherland-Hodgman algorithm.
However, I wonder how the Sutherland-Hodgman algorithm works in the homogeneous system (4D).
I made a situation.
In VCS, there is a line, R= (0, 3, -2, 1), S = (0, 0, 1, 1) (End points of the line)
And a frustum is right = 1, left = -1, near = 1, far = 3, top = 4, bottom = -4
Therefore, the projection matrix P is
1 0 0 0
0 1/4 0 0
0 0 -2 -3
0 0 -1 0
If we transform the line's end points with P, then each end point becomes
R' = (0, 3/4, 1, 2), S' = (0, 0, -5, -1)
I know that perspective division should not be done now, because if we do perspective division, the clipping result is not correct.
Here I am curious:
1. What makes the clipping correct, given that we did not yet do the perspective division? What mathematical properties are at work here?
2. How do I calculate the clipping result in the above situation?
(The fact that two intersections occur in the w-y coordinate system confuses me. I thought the result should be one line, not one divided into two parts.)
I'm not quite sure whether you understood the Sutherland-Hodgman algorithm correctly (or at least I didn't get your example). Thus I will prove here that it doesn't make any difference whether clipping happens before or after the perspective divide. The proof is only shown for one plane (clipping has to be done against all 6 planes), since applying multiple such clipping operations after each other makes no difference here.
Let's assume we have two points (as you described) R' and S' in clip space. And we have a clipping plane P given in hessian normal form [n, p] (if we take the left plane this is [1,0,0,1]).
If we were calculating in pure 3d space (R^3), then checking whether a line crosses this plane would be done by calculating the signed distance of both points to the plane and checking if the signs differ. The signed distance for a point X = [x/w, y/w, z/w] is given by
D = dot(n, X) + p
Let's write down the actual equation we have (including the perspective divide):
d = n_x * x/w + n_y * y/w + n_z * z/w + p
In order to find the exact intersection point, we would, again in R^3 space, calculate for both points (A = R'/R'w, B = S'/S'w) the distance to the plane (da, db) and perform a linear interpolation (I will only write the equations for the x-coordinate here since y and z work similarly):
x = A_x * (1 - da/(da - db)) + B_x * (da/(da-db))
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
And w = 1 (since we interpolate between two points both having w = 1)
Now we already know from the previous discussion that clipping has to happen before the perspective divide, thus we have to adapt this equation. This means that for each point, the clipping cube has a different scaling w. Let's see what happens when we try to perform the same operations in P^3 (before the perspective divide):
First, we "revert" the perspective divide to get to X=[x,y,z,w] for which the distance to the plane is given by
d = n_x * x/w + n_y * y/w + n_z * z/w + p
d = (n_x * x + n_y * y + n_z * z) / w + p
d * w = n_x * x + n_y * y + n_z * z + p * w
d * w = dot([n, p], [x,y,z,w])
d * w = dot(P, X)
Since we are only interested in the sign of the whole calculation, which we haven't changed by our operations, we can compare the d*w values and get the same inside-outside result as in R^3.
For the two points R' and S', the calculated distances in P^3 are dr = da * R'w and ds = db * S'w. When we now use the same interpolation equation as above but for R' and S' we get for x:
x' = R'x * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'x * (da * R'w)/(da * R'w - db * S'w)
At first glance this looks rather different from the result we got in R^3, but since we are still in P^3 (thus x'), we still have to do the perspective divide on the result (this is allowed here, since the interpolated point will always be at the border of the view frustum and thus dividing by w will not introduce any problems). The interpolated w component is given as:
w' = R'w * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'w * (da * R'w)/(da * R'w - db * S'w)
And when calculating x/w we get
x = x' / w';
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
which is exactly the same result as when calculating everything in R^3.
Conclusion: The interpolation gives the same result, no matter if we perform the perspective divide first and interpolation afterwards or interpolating first and dividing then. But with the second variant we avoid the problem with points flipping from behind the viewer to the front since we are only dividing points that are guaranteed to be inside (or on the border) of the viewing frustum.
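For the numbers in the question, here is a small numpy sketch of this clip-space clipping against the near plane, written as dot([0,0,1,1], X) >= 0 (just an illustration of the interpolation above, not library code):

import numpy as np

def clip_segment(r, s, plane):
    '''Clip the clip-space segment r->s against dot(plane, X) >= 0.
    Returns the clipped segment, or None if it is completely outside.'''
    dr = np.dot(plane, r)        # d * w for r, only the sign matters
    ds = np.dot(plane, s)
    if dr < 0 and ds < 0:
        return None              # both end points outside
    if dr >= 0 and ds >= 0:
        return r, s              # both end points inside
    t = dr / (dr - ds)           # interpolation factor, still in clip space
    cut = r + t * (s - r)
    return (cut, s) if dr < 0 else (r, cut)

r_prime = np.array([0.0, 0.75, 1.0, 2.0])    # R' from the question
s_prime = np.array([0.0, 0.0, -5.0, -1.0])   # S' from the question
near = np.array([0.0, 0.0, 1.0, 1.0])        # near plane: z >= -w
print(clip_segment(r_prime, s_prime, near))  # cuts at (0, 0.5, -1, 1)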
You speak of polygon clipping in a homogeneous system (4D), but from your question I assume that you actually mean homogeneous coordinates, which makes a lot more sense. (There are many possible homogeneous systems.)
Ok, so you want to use "4D" coordinates, which are really "3D coordinates plus a w term". For projection transformations, the w term is the projective term that partially relates the screen-space coordinate to the original world-space position. Assuming that you are NOT interested in projective-space clipping, this term is not relevant.
I'm assuming this because the clipping box you describe is axis-aligned on planes in 3D. Even if it was rotated or scaled in 3D space, each of the planes would still be a 3D plane, the 4th coordinate always being '1'.
So how to clip:
clip line segment L against each of the planes of the clipping box, i.e. 6 clipping planes in total (you describe the normals of each clipping plane aptly), and see if any intersection point v is shared by the line and the tested plane P so that
v lies on the line segment (i.e. a t between 0 and 1)
v lies within the bounds of the plane P (i.e. the coordinate should not lie beyond any of the adjacent planes. Since you are using axis-aligned clipping planes, this is easy to check.)
Any of these intersections between a (3D + w) line and one of the 3D planes occurs in 3D, and intersection points have to be a 3D coordinates. You can extend each of these coordinates with a 4th w coordinate into a "4D" coordinate so that you can further transform them using 4x4 matrices for view and projection processing.

3d coordinate from point and angles

I'm working on a simple OpenGL world, and so far I've got a bunch of cubes randomly placed about and it's pretty fun to go zooming around. However, I'm ready to move on. I would like to drop blocks in front of my camera, but I'm having trouble with the 3D angles. I'm used to 2D stuff, where to find an end point we simply do something along the lines of:
endy = y + (sin(theta)*power);
endx = x + (cos(theta)*power);
However, when I add the third dimension I'm not sure what to do! It seems to me that the power in the two-dimensional plane would be determined by the z axis's cos(theta)*power, but I'm not positive. If that is correct, it seems to me I'd do something like this:
endz = z + (sin(xtheta)*power);
power2 = cos(xtheta) * power;
endx = x + (cos(ytheta) * power2);
endy = y + (sin(ytheta) * power2);
(where xtheta is the up/down theta and ytheta is the left/right theta)
Am I even close to the right track here? How do I find an end point given a current point and two angles?
Working with Euler angles doesn't work well in 3D environments; there are several issues and corner cases in which they simply break down. And you actually don't even have to use them.
What you should do is exploit the fact that transformation matrices are nothing else than coordinate-system bases written down in a comprehensible form. So you have your modelview matrix MV. This consists of a model-space transformation, followed by a view transformation (column-major matrices multiply right to left):
MV = V * M
So what we want to know is in which way the "camera" lies within the world. That is given to you by the inverse view matrix V^-1. You can of course invert the view matrix using the Gauss-Jordan method, but most of the time your view matrix will consist of a 3×3 rotation matrix R with a translation column vector P added:
/ R  P \
\ 0  1 /
Recall that
(M * N)^-1 = N^-1 * M^-1
and also
(M * N)^T = N^T * M^T
so it seems there is some kind of relationship between transposition and inversion. Not every matrix's transpose is its inverse, but there are some where the transpose of a matrix is its inverse: the so-called orthonormal matrices. Rotations are orthonormal. So
R^-1 = R^T
neat! This allows us to find the inverse of the view matrix as follows (I suggest you try to prove it as an exercise):
V    = / R    P \
       \ 0    1 /

V^-1 = / R^T  -R^T·P \
       \  0       1  /
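A quick numpy check of that block-inverse formula (a throwaway sketch, not part of the answer's OpenGL code):

import numpy as np

angle = 0.7                       # an arbitrary rotation about the z axis
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
P = np.array([1.0, 2.0, 3.0])     # an arbitrary translation column

V = np.eye(4)
V[:3, :3] = R
V[:3, 3] = P

V_inv = np.eye(4)
V_inv[:3, :3] = R.T
V_inv[:3, 3] = -R.T @ P

print(np.allclose(V @ V_inv, np.eye(4)))   # True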
So how does this help us to place a new object in the scene at a distance from the camera? Well, V is the transformation from world space into camera space, so V^-1 transforms from camera to world space. So given a point in camera space you can transform it back to world space. Say you wanted to place something at the center of the view in distance d. In camera space that would be the point (0, 0, -d, 1). Multiply that with V^-1:
V^-1 * (0, 0, -d, 1) = -(R^T)_z * d - R^T * P
(that is, the camera's world-space position, -R^T * P, plus d times its viewing direction, -(R^T)_z)
Which is exactly what you want. In your OpenGL program you somewhere have your view matrix V, probably not properly named yet, but anyway it is there. Say you use old OpenGL-1 and GLU's gluLookAt:
void display(void)
{
    /* setup viewport, clear, set projection, etc. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(...);
    /* the modelview matrix now holds the View transform */
At this point we can extract the modelview matrix
GLfloat view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view);
Now view is in column-major order. If we were to use it directly we could directly address the columns. But remember that the transpose is the inverse of a rotation, so we actually want the 3rd row vector of the rotation part. So let's assume you keep view around, so that in your event handler (outside display) you can do the following:
GLfloat z_row[3];
z_row[0] = view[2];
z_row[1] = view[6];
z_row[2] = view[10];
We also need the camera's world-space position, which is -R^T * P (P being the translation column view[12..14]):
GLfloat cam_pos[3] = {
    -(view[0]*view[12] + view[1]*view[13] + view[ 2]*view[14]),
    -(view[4]*view[12] + view[5]*view[13] + view[ 6]*view[14]),
    -(view[8]*view[12] + view[9]*view[13] + view[10]*view[14]),
};
Now we can calculate the new object's position at distance d:
GLfloat new_object_pos[3] = {
    cam_pos[0] - z_row[0]*d,
    cam_pos[1] - z_row[1]*d,
    cam_pos[2] - z_row[2]*d,
};
There you are. As you can see, nowhere did you have to work with angles or trigonometry; it's just straight linear algebra.
Well, I was close. After some testing, I found the correct formula for my implementation; it looks like this:
endy = cam.get_pos().y - (sin(toRad(180-cam.get_rot().x))*power1);
power2 = cos(toRad(180-cam.get_rot().x))*power1;
endx = cam.get_pos().x - (sin(toRad(180-cam.get_rot().y))*power2);
endz = cam.get_pos().z - (cos(toRad(180-cam.get_rot().y))*power2);
This takes my camera's position and rotation angles and gets the corresponding points. Works like a charm =]

Screen Projection and Culling united

I am currently dealing with several thousand boxes that I'd like to project onto the screen to determine their sizes and distances to the camera.
My current approach is to get a sphere representing the box and project that using view and projection matrices and the viewport values.
// PSEUDOCODE
// project box center from world into viewspace
boxCenterInViewSpace = viewMatrix * boxCenter;
// get two points left and right of center
leftPoint = boxCenterInViewSpace - radius;
rightPoint = boxCenterInViewSpace + radius;
// project points from view space into clip space
leftPoint = projectionMatrix * leftPoint;
rightPoint = projectionMatrix * rightPoint;
// normalize points
leftPoint /= leftPoint.w;
rightPoint /= rightPoint.w;
// move to 0..1 range
leftPoint = leftPoint * 0.5 + 0.5;
rightPoint = rightPoint * 0.5 + 0.5;
// scale to viewport
leftPoint.x = leftPoint.x * viewPort.right + viewPort.left;
leftPoint.y = leftPoint.y * viewPort.bottom + viewPort.top;
rightPoint.x = rightPoint.x * viewPort.right + viewPort.left;
rightPoint.y = rightPoint.y * viewPort.bottom + viewPort.top;
// at this point i check if the node is visible on screen by comparing the points to the viewport
// calculate size
length(rightPoint - leftPoint)
At another point i calculate the distance of the box to the camera.
The first problem is that I won't know if the box is just below the viewport, as I only calculate horizontally. Is there a way to project a real sphere onto the screen somehow? Some method that looks like:
float getSizeOfSphereProjectedOnScreen(vec3 midpoint, float radius)
The other question is simpler: in which coordinate space does the z coordinate correspond to the distance to the camera?
To sum it up i want to calculate:
Is the Box in the view frustum?
What is the size of the Box on the screen?
What is the distance from Box to camera?
To simplify calculations I'd like to use a sphere representation for this, but I don't know how to project a sphere.
[Updated]
What is the distance from Box to camera?
In [which] coordinate space is the z coordinate corresponding to the distance to the camera?
The answer is none of the usual spaces. The closest one would be in view space (i.e. after you apply the view matrix but not the projection matrix). In view space, the distance to the camera should be sqrt(x*x + y*y + z*z), because the camera is at the origin. (z would be a reasonable approximation only if |x| and |y| were really small relative to |z|.) This is assuming that knowing the distance from the camera to the center of the box is good enough.
I think if you really wanted a space in which the z coordinate corresponds to the distance to the camera, you'd need to map a spherical locus of points sqrt(x*x + y*y + z*z) = d to a plane z = d. I don't know that you can do that with a matrix.
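A tiny numpy sketch of that view-space distance (assuming a column-vector 4×4 view matrix; the function name is just illustrative):

import numpy as np

def distance_to_camera(view_matrix, box_center_world):
    # transform the world-space center into view space; the camera sits at
    # the origin there, so the distance is just the length of the vector
    center = np.append(np.asarray(box_center_world, dtype=float), 1.0)
    view_space = np.asarray(view_matrix, dtype=float) @ center
    return np.linalg.norm(view_space[:3])   # sqrt(x*x + y*y + z*z)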
Is the Box in the view frustum?
What is the size of the Box on the screen?
I think you're on the right track with this, but depending on which direction the camera is facing, your left and right points might not determine how wide the box looks or whether the box intersects the view frustum. See my answer to your other question for a long way to do this.

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This gives a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, with deltas applied at each step.
This works fine except when I try to do perspective-correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part tells you where the deltas are flipped. As you can see, the two triangles both seem to be going towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
    Vec3 projected = m_Rotation * (a_Point - m_Position);
    // #define FOV_ANGLE 60.f
    // static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
    // static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
    float zcamera = DEPTH / projected.z;
    return zcamera;
}
Am I right, is it a z buffer issue?
The Z-buffer has nothing to do with it.
The Z-buffer is only useful when triangles are overlapping and you want to make sure that they are drawn correctly (i.e. correctly ordered in Z). The Z-buffer will, for every pixel of the triangle, determine if a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this cannot be the issue.
I've made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence what you've got is not bad! A thing like this could be caused by a very small error.
General tips in debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc etc) and step through it with the debugger and see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account... if you also want to do Gouraud shading you also have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I already calculate the needed interpolation offsets for the bottom part in the 'pseudocode' fragment below.
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
how the second part of the triangle is calculated depends on whether the left side of the triangle is really on the left side (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
    leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    rightDeltaX2 = rightDeltaX
    leftDeltaU = (u[1]-u[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
    leftDeltaV = (v[1]-v[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
    leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0]) //for texture mapping
    leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
    swap(leftDeltaX, rightDeltaX);
    leftDeltaX2 = leftDeltaX;
    rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    leftDeltaU = (u[2]-u[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaU2 = leftDeltaU
    leftDeltaV = (v[2]-v[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaV2 = leftDeltaV
    leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0]) //for texture mapping
    leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v and z for the fractional part of y, for subpixel accuracy (I guess this is also needed for floats; for my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
is Y past bottom of screen? stop rendering!
calc startX and endX for the first horizontal line
leftCurX = ceil(startx); leftCurY = ceil(endy);
clip the line to be drawn to the left horizontal border of the screen (or clipping region)
prepare a pointer to the destination buffer (doing it through array indexes everytime is too slow)
unsigned int buf = destbuf + (ypitch) + startX; (unsigned int in case you are doing 24bit or 32 bits rendering)
also prepare your ZBuffer pointer here (if you are using this)
for(x=startX; x < endX; x++)
{
now for perspective texture mapping (using no bilinear interpolation) you do the following:
code fragment:
float tv = startV / startZ
float tu = startU / startZ;
tv %= texturePitch; //make sure the texture coordinates stay on the texture if they are too wide/high
tu %= texturePitch; //I'm assuming square textures here. With fixed point you could have used &=
unsigned int *textPtr = textureBuf+tu + (tv*texturePitch); //in case of fixedpoints one could have shifted the tv. Now we have to multiply everytime.
int destColTm = *(textPtr); //this is the color (if we only use texture mapping) we'll be needing for the pixel
optional: check the zbuffer if the previously plotted pixel at this coordinate is higher or lower then ours.
plot the pixel
startZ += slopeZ; startU+=slopeU; startV += slopeV; //update all interpolators
} end of x loop
leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ; //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I had read at the time is available online Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
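To make the idea concrete, here is a small standalone sketch (Python, not the poster's rasterizer code) of interpolating u/z, v/z and 1/z linearly across a span and dividing back per pixel:

def perspective_correct_span(uv0, z0, uv1, z1, steps):
    '''Interpolate u/z, v/z and 1/z linearly in screen space, then divide
    back per pixel; returns the perspective-correct (u, v) for each step.'''
    result = []
    for i in range(steps + 1):
        t = i / steps
        u_over_z = (1 - t) * uv0[0] / z0 + t * uv1[0] / z1
        v_over_z = (1 - t) * uv0[1] / z0 + t * uv1[1] / z1
        one_over_z = (1 - t) / z0 + t / z1
        result.append((u_over_z / one_over_z, v_over_z / one_over_z))
    return result

# example: a span whose end points lie at depths 1 and 4; note how the
# recovered coordinates advance slowly near the close end and quickly
# near the far end, instead of linearly
print(perspective_correct_span((0.0, 0.0), 1.0, (1.0, 1.0), 4.0, 4))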