Find if a point lies within a rectangle - C++

How can I find whether a point lies within a 2D rectangle, given its 4 corner points?

Transform the point to a coordinate frame aligned with the rectangle, then the problem becomes axis-aligned and trivial.
If the rectangle consists of the following 4 points:
a b
c d
Then get the "x-axis" and "y-axis" of the rectangle as:
x = Normalize(d-c)
y = Normalize(a-c)
Then construct a rotation matrix using x and y as columns:
r = [ x | y ]
If you're using 3-d coordinates, we need a z axis:
z = CrossProduct(x, y)
r = [ x | y | z ]
Your transform matrix from world coordinates to your rectangle's axis-aligned coordinates becomes:
T = [ r^T | -r^T * c ]
    [ 0^T |        1 ]
Here we've chosen the lower-left corner c to be the local origin. "r^T" is r transposed. "0^T" is either a 2-d or 3-d row-vector filled with zeros. 1 is just a one. Note that this is just the inverse of the simpler rectangle-to-world transform, which is
T^-1 = [ r   | c ]
       [ 0^T | 1 ]
We can use T to transform the point to axis-aligned coordinates. Remember to pad p with a trailing 1, since T is a homogeneous matrix.
tp = T * p; // Don't forget to pad p with a trailing 1 before multiplying.

// Checks that p isn't below or to the left of the rectangle.
for ( int d = 0; d < num_dimensions; ++d ) {
  if ( tp[d] < 0.0 ) {
    return false;
  }
}

// Checks that p isn't to the right of the rectangle.
double width = Length(d-c);
if ( tp[0] > width ) {
  return false;
}

// Checks that p isn't above the rectangle.
double height = Length(a-c);
if ( tp[1] > height ) {
  return false;
}

// p must be inside or on the rectangle.
return true;
If you're using 3d coordinates, note that the above disregards the local z value of the transformed point tp. Even if p is out of the plane of the rectangle, the above behaves as if it had been projected onto the rectangle's surface. If you want to check for coplanarity, just do the following beforehand:
if ( fabs(tp[2]) > some_small_positive_number ) {
  return false; // point is out of the rectangle's plane.
}
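For reference, here's a minimal C++ sketch of the 2-D version of this test, assuming a plain Vec2 struct and the a/b/c/d corner layout above (the names are illustrative, not a drop-in API). It avoids building the matrix explicitly by comparing projections against the squared edge lengths:

struct Vec2 { double x, y; };

static Vec2   sub(Vec2 p, Vec2 q) { return { p.x - q.x, p.y - q.y }; }
static double dot(Vec2 p, Vec2 q) { return p.x * q.x + p.y * q.y; }

// c is the lower-left corner, d the lower-right, a the upper-left.
bool pointInRect(Vec2 p, Vec2 a, Vec2 c, Vec2 d) {
    Vec2 xAxis = sub(d, c);   // rectangle's "x-axis" (not normalized)
    Vec2 yAxis = sub(a, c);   // rectangle's "y-axis" (not normalized)
    Vec2 local = sub(p, c);   // p relative to the local origin c

    // Projection of p onto each axis, scaled by that axis' length;
    // comparing against the squared length is equivalent to 0 <= tp <= width/height.
    double px = dot(local, xAxis);
    double py = dot(local, yAxis);
    return px >= 0.0 && px <= dot(xAxis, xAxis) &&
           py >= 0.0 && py <= dot(yAxis, yAxis);
}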

I think this might answer your question
full disclosure - I went to Drexel for my grad degree

To make it OpenGL specific:
I suppose your 2D rectangle is in screen coordinates!
First:
gluProject (bli, bla, blorp, ...);
to get from 3D to screen coordinates.
Then: Noah's suggestion.
Skip the projection step if your point in question is already 2D ;)
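As a rough sketch of that projection step with fixed-function GL (the function and parameter names here are mine, standing in for the placeholder arguments above; error checking is omitted):

#include <GL/glu.h>

// Project a 3D point to window (screen) coordinates so it can be compared
// against a screen-space rectangle.
void projectToScreen(GLdouble objX, GLdouble objY, GLdouble objZ,
                     GLdouble& winX, GLdouble& winY, GLdouble& winZ) {
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);
    gluProject(objX, objY, objZ, model, proj, viewport, &winX, &winY, &winZ);
}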

For a non-axis-aligned rectangle, use the same algorithm as for general polygons: the point-in-polygon test:
Imagine a ray pointing rightward from the test point. Test whether each line in the polygon crosses the ray. If an even number of lines crosses the ray, then the point is outside the polygon. If an odd number of lines crosses the ray, then the point is inside the polygon.
In the case of a rectangle, between zero and two lines will cross the ray.
If a line touches the ray but does not cross it, the result is ambiguous. Therefore, in your calculations, imagine that the ray is an infinitely small amount ɛ higher than its y coordinate, so that it is impossible for a line to touch the ray without crossing it.
Given your test point (x,y) and line (x1,y1,x2,y2), testing whether a line crosses the ray is pretty simple. Assume, without loss of generality, that y1 < y2. Then
if y < y2 and y >= y1:
    let x0 = x1 + (y-y1)/(y2-y1) * (x2-x1)   // crossing point (x0, y)
    if x0 > x:
        crossing_detected++
http://en.wikipedia.org/wiki/Point_in_polygon
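A minimal C++ sketch of this ray-casting test (the Point type and function name are just for illustration); note the half-open interval [y1, y2), which implements the ε trick of never letting an edge merely touch the ray:

#include <utility>
#include <vector>

struct Point { double x, y; };

// Counts rightward ray crossings; an odd count means the point is inside.
bool pointInPolygon(const Point& p, const std::vector<Point>& poly) {
    int crossings = 0;
    for (size_t i = 0; i < poly.size(); ++i) {
        Point p1 = poly[i];
        Point p2 = poly[(i + 1) % poly.size()];
        if (p1.y > p2.y) std::swap(p1, p2);   // ensure p1.y < p2.y
        if (p.y >= p1.y && p.y < p2.y) {      // half-open test avoids double-counting a vertex
            double x0 = p1.x + (p.y - p1.y) / (p2.y - p1.y) * (p2.x - p1.x);
            if (x0 > p.x) ++crossings;
        }
    }
    return (crossings % 2) == 1;
}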

It's easy to test if a point lies in a triangle, so you can split your rectangle in two triangles and test these. See e.g. http://www.blackpawn.com/texts/pointinpoly/default.html

A generic point-in-convex-quadrilateral test is sufficient. The quad is defined as an ordered series of points, and the test handles both clockwise and counter-clockwise winding:
typedef struct { float x; float y; } vec2;

bool pointIsInQuad(const vec2 point, const vec2 quad[4])
{
    bool sides[4];
    for (int i = 0; i < 4; i++) {
        sides[i] = ((point.x - quad[i].x) * (quad[(i + 1) % 4].y - quad[i].y) -
                    (point.y - quad[i].y) * (quad[(i + 1) % 4].x - quad[i].x)) > 0.0f;
    }
    return ((sides[0] == sides[1]) && (sides[0] == sides[2]) && (sides[0] == sides[3]));
}
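For example (this same-side test assumes a convex quad, which a rectangle always is; the corner values here are arbitrary):

vec2 quad[4] = { {0.0f, 0.0f}, {4.0f, 0.0f}, {4.0f, 2.0f}, {0.0f, 2.0f} };
vec2 p = { 1.0f, 1.0f };
bool inside = pointIsInQuad(p, quad);   // true: all four edge tests agree in sign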

Related

How can I iterate a coordinate sphere using an expanding spherical sector (cone)?

Given an integer 3D coordinate system, a center point P, a vector in some direction V, and a max sphere radius R:
I want to iterate over only integer points in a fashion that starts at P and goes along direction V until reaching the max radius R.
Then, for some small angle T iterate all points within the cone (or spherical sector) around V.
Incrementally expand T until T is pi/2 radians and every point within the sphere has been iterated.
I need to do this with O(1) space complexity. So the order of the points can't be precomputed/sorted but must result naturally from some math.
Example:
// Vector3 represents coordinates x, y, z
// where (typically) x is left/right, y is up/down, z is depth
Vector3 center = Vector3(0, 0, 0); // could be anything
Vector3 direction = Vector3(0, 100, 0); // could be anything
int radius = 4;
double piHalf = acos(0.0); // half of pi
std::queue<Vector3> list;
for (double angle = 0; angle < piHalf; angle += .1)
{
    int x = // confusion begins here
    int y = // ..
    int z = // ..
    list.push(Vector3(x, y, z));
}
See picture for this example
The first coordinates that should be caught are:
A(0,0,0), C(0,1,0), D(0,2,0), E(0,3,0), B(0,4,0)
Then, expanding the angle somewhat (orange cone):
K(-1,0,3), X(1,0,3), (0,1,3), (0,-1,3)
Expanding the angle a bit more (green cone):
F(1,1,3), (-1,-1,3), (1,-1,3) (-1,1,3)
My guess for what would be next is:
L(1,0,2), (-1,0,2), (0,1,2), (0,-1,2)
M(2,0,3) would be hit somewhat after
Extra notes and observations:
A cone will hit a max of four points at its base, if the vector is perpendicular to an axis and originates at an integer point. It may also hit points along the cone wall depending on the angle
I am trying to do this in c++
I am aware of how to check whether a point X is within any given cone or spherical sector by comparing the angle between V and PX with T, and I am currently using this knowledge for a lesser solution.
This is not a homework question, I am working on a 3D video game~
1. Iterate all integer positions Q in your sphere.
   Simple 3x nested for loops through x, y, z in range <P-R, P+R> will do. Just check inside the sphere, so:
   u = (x,y,z) - P;
   dot(u,u) <= R*R
2. Test if point Q is exactly on V,
   simply by checking the angle between PQ and V by dot product:
   u = Q - P
   u = u / |u|
   v = V / |V|
   if (dot(u,v) == 1) point Q is on V
3. Test if point Q is exactly on the surface of the "cone",
   simply by checking the angle between PQ and V by dot product:
   u = Q - P
   u = u / |u|
   v = V / |V|
   if (dot(u,v) == cos(T/2)) point Q is on the "cone"
   where I assume T is the full "cone" angle, not the half angle.
Beware you need to use floats/doubles for this and make the comparison with some margin for error, like:
if (fabs(dot(u,v) - 1.0)      < 1e-6) point Q is on V
if (fabs(dot(u,v) - cos(T/2)) < 1e-6) point Q is on the "cone"
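A minimal C++ sketch of these checks, organized as the angle sweep the question asks for: each pass re-scans the bounding box (O(1) extra space) and emits only the points whose angle to V falls in the newly added band. Vec3, visitBand and the helpers are assumptions, not the answerer's code:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Visit every integer point inside the sphere of radius R around P whose
// angle to direction V lies in the band [prevAngle, angle).
void visitBand(Vec3 P, Vec3 V, int R, double prevAngle, double angle) {
    double vlen = std::sqrt(dot(V, V));
    for (int x = (int)P.x - R; x <= (int)P.x + R; ++x)
    for (int y = (int)P.y - R; y <= (int)P.y + R; ++y)
    for (int z = (int)P.z - R; z <= (int)P.z + R; ++z) {
        Vec3 u = sub(Vec3{ (double)x, (double)y, (double)z }, P);
        double len2 = dot(u, u);
        if (len2 > (double)R * R) continue;               // outside the sphere
        double a = 0.0;                                   // the center itself
        if (len2 > 0.0) {
            double c = dot(u, V) / (std::sqrt(len2) * vlen);
            if (c > 1.0) c = 1.0; else if (c < -1.0) c = -1.0;
            a = std::acos(c);
        }
        if (a >= prevAngle && a < angle)                  // in the new cone band
            std::printf("(%d, %d, %d)\n", x, y, z);
    }
}

// Usage: sweep the half-angle from 0 up to pi/2 in small steps.
// double piHalf = std::acos(0.0);
// for (double t = 0.1; t - 0.1 <= piHalf; t += 0.1)
//     visitBand(P, V, R, t - 0.1, t);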

Circle collision with compound object

I would like to do collision detection between a circle and a section of a circular ring. The circle is defined by its position and its radius. The other object is defined by an inner and outer radius, and then a startPoint and endPoint, both [x, y] points.
In the examples below, this is the circle and other is the ring section.
First I just check if it's colliding with the full ring. This works without a problem.
float mag = this.position.Magnitude();
if (mag < other.InnerRadius() - this.radius ||
    mag > other.OuterRadius() + this.radius) {
    return false;
}
But then I need to check if the circle is inside or outside of the section defined by the two points. Closest I was able to get was to check if it isn't colliding with the start and end vectors, but this returns wrong results when the circle is fully inside the ring section.
auto dot1 = Vector::Dot(position, other.StartPoint());
auto projected1 = dot1 / Vector::Dot(other.StartPoint(), other.StartPoint()) * other.StartPoint();
auto distance1 = Vector::Distance(position, projected1);
auto dot2 = Vector::Dot(position, other.EndPoint());
auto projected2 = dot2 / Vector::Dot(other.EndPoint(), other.EndPoint()) * other.EndPoint();
auto distance2 = Vector::Distance(position, projected2);
return distance1 < radius || distance2 < radius;
What is the easiest way to check if a circle is colliding with a object defined by these two vectors?
Edit: all the point objects I'm using here are my custom Vector class that has implemented all the vector operations.
Edit2: just to clarify, the ring object has its origin at [0, 0]
Here is a simple algorithm.
First, let's agree on variable names:
Here r1 ≤ r2, -π/2 ≤ a1 ≤ a2 ≤ π/2.
(As I was reminded in comments, you have start and end points rather than angles, but I'm going to use angles as they seem more convenient. You can easily obtain angles from points via atan2(y-ry, x-rx), just make sure that a1 ≤ a2. Or you can rewrite the algorithm to not use angles at all.)
We need to consider 3 different cases. The case depends on where the circle center is located relative to the ring segment:
In the 1st case, as you already figured, collision occurs if the length of the vector (cx-rx, cy-ry) is greater than r1-rc and less than r2+rc.
In the 2nd case, collision occurs if the distance between the circle center and the closest straight edge is less than rc.
In the 3rd case collision occurs if the distance between the circle center and the closest of 4 corners is less than rc.
Here's some pseudocode:
rpos = vec2(rx, ry);          // Ring segment center coordinates
cpos = vec2(cx, cy);          // Circle coordinates
a = atan2(cy - ry, cx - rx);  // Relative angle
r = length(cpos - rpos);      // Distance between centers
if (a > a1 && a < a2) // Case 1
{
    does_collide = (r + rc > r1 && r - rc < r2);
}
else
{
    // Ring segment corners:
    p11 = vec2(cos(a1), sin(a1)) * r1;
    p12 = vec2(cos(a1), sin(a1)) * r2;
    p21 = vec2(cos(a2), sin(a2)) * r1;
    p22 = vec2(cos(a2), sin(a2)) * r2;
    if (((cpos-p11) · (p12-p11) > 0 && (cpos-p12) · (p11-p12) > 0) ||
        ((cpos-p21) · (p22-p21) > 0 && (cpos-p22) · (p21-p22) > 0)) // Case 2
    {
        // Normals of straight edges:
        n1 = normalize(vec2(p12.y - p11.y, p11.x - p12.x));
        n2 = normalize(vec2(p21.y - p22.y, p22.x - p21.x));
        // Distances to edges:
        d1 = n1 · (cpos - p11);
        d2 = n2 · (cpos - p21);
        does_collide = (min(d1, d2) < rc);
    }
    else // Case 3
    {
        // Squared distances to corners:
        c1 = length_sqr(cpos - p11);
        c2 = length_sqr(cpos - p12);
        c3 = length_sqr(cpos - p21);
        c4 = length_sqr(cpos - p22);
        does_collide = (sqrt(min(c1, c2, c3, c4)) < rc);
    }
}
To compare the small circle to a ray:
First check to see whether the circle encloses the origin; if it does, then it intersects the ray. Otherwise, read on.
Consider the vector v from the origin to the center of the circle. Normalize that, normalize the ray R, and take the cross product Rxv. If it's positive, v is counterclockwise from R, otherwise it's clockwise from R. Either way, take acos to get the angle between them.
If the circle has radius r and its center is a distance d from the origin, then the angular half-width of the circle (as seen from the origin) is asin(r/d). If the angle between R and v is less than that, then the circle intersects the ray.
Assume that you know whether the object extends clockwise or counterclockwise from Start to End. (The numbers won't tell you that; you must know it already or the problem is unsolvable.) In your example, it's clockwise. Now you have to be careful; if the angular length of the arc is <= pi, then you can proceed, otherwise it is easier to determine whether the circle is in the smaller sector outside the sector of the object. But assuming the object spans less than pi, the circle is inside the sector of the object (i.e. between the rays) if and only if it is clockwise from the Start and counterclockwise from the End.
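A small sketch of the circle-versus-ray part of this (Vec2 and the function name are illustrative; the ring is assumed to sit at the origin as in the question):

#include <cmath>

struct Vec2 { double x, y; };

static double dot(Vec2 a, Vec2 b) { return a.x*b.x + a.y*b.y; }
static double len(Vec2 a)         { return std::sqrt(dot(a, a)); }

// Does the circle (center c, radius rc) touch the ray from the origin along dir?
bool circleIntersectsRay(Vec2 dir, Vec2 c, double rc) {
    double d = len(c);
    if (d <= rc) return true;                    // circle encloses the origin
    double cosA = dot(dir, c) / (len(dir) * d);  // cos of angle between ray and center
    if (cosA > 1.0) cosA = 1.0; else if (cosA < -1.0) cosA = -1.0;
    double angle = std::acos(cosA);
    return angle <= std::asin(rc / d);           // within the circle's angular half-width
}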

How do I get three non-colinear points on a plane? - C++

I'm trying to implement a line-plane intersection algorithm. According to Wikipedia I need three non-colinear points on the plane to do that.
I therefore tried implementing this in C++; however, something is definitely wrong, because it makes no sense that I can choose whatever x and y coordinates I want and they'll fit in the plane. What if the plane is vertical and runs along the x-axis? No point with y=1 would then be in the plane.
I realize that this problem has been posted a lot on StackOverflow, and I see lots of solutions where the plane is defined by 3 points. But I only have a normal and a position. And I can't test my line-plane intersection algorithm before I sort out my non-colinear point finder.
The problem right now, is that I'm dividing by normal.z, and that obviously won't work when normal.z is 0.
I'm testing with this plane: Plane* p = new Plane(Color(), Vec3d(0.0,0.0,0.0), Vec3d(0.0,1.0,0.0)); // second parameter : position, third parameter : normal
The current code gives this incorrect answer:
{0 , 0 , 0} // alright, this is the original
{12.8377 , 17.2728 , -inf} // obviously this is not a non-colinear point on the given plane
Here's my code:
std::vector<Vec3d>* Plane::getThreeNonColinearPoints() {
    std::vector<Vec3d>* v = new std::vector<Vec3d>();
    v->push_back(Vec3d(position.x, position.y, position.z)); // original position can serve as one of the three non-colinear points.
    srandom(time(NULL));
    double rx, ry, rz, start;
    rx = Plane::fRand(10.0, 20.0);
    ry = Plane::fRand(10.0, 20.0);
    // Formula from here: http://en.wikipedia.org/wiki/Plane_(geometry)#Definition_with_a_point_and_a_normal_vector
    // nx(x-x0) + ny(y-y0) + nz(z-z0) = 0
    // |-----------------| <- this is "start"
    // I'll try to insert position as x0,y0,z0 and normal as nx,ny,nz, and solve the equation
    start = normal.x * (rx - position.x) + normal.y * (ry - position.y);
    // nz(z-z0) = -start
    start = -start;
    // (z-z0) = start/nz
    start /= normal.z; // division by zero
    // z = start+z0
    start += position.z;
    rz = start;
    v->push_back(Vec3d(rx, ry, rz));
    // TODO one more point
    return v;
}
I realize that I might be trying to solve this totally wrong. If so, please link a concrete implementation of this. I'm sure it must exist, when I see so many line-plane intersection implementations.
Thanks in advance.
A plane can be defined in several ways. Typically a point on the plane and a normal vector are used. To get the normal vector from three points (P1, P2, P3), take the cross product of two sides of the triangle they form:
P1 = {x1, y1, z1};
P2 = {x2, y2, z2};
P3 = {x3, y3, z3};
N = UNIT( CROSS( P2-P1, P3-P1 ) );
Plane P = { P1, N }
The reverse, to go from a point P1 and normal N to three points: start from any direction G that is not parallel to the normal N (i.e. MAG(CROSS(G,N)) != 0). The two orthogonal directions along the plane are then
//try G={0,0,1} or {0,1,0} or {1,0,0}
G = {0,0,1};
if( MAG(CROSS(G,N))<TINY ) { G = {0,1,0}; }
if( MAG(CROSS(G,N))<TINY ) { G = {1,0,0}; }
U = UNIT( CROSS(N, G) );
V = CROSS(U,N);
P2 = P1 + U;
P3 = P1 + V;
A line is defined by a point and a direction. Typically two points (Q1, Q2) define the line
Q1 = {x1, y1, z1};
Q2 = {x2, y2, z2};
E = UNIT( Q2-Q1 );
Line L = { Q1, E }
The intersection of the line and plane are defined by the point on the line r=Q1+t*E that intersects the plane such that DOT(r-P1,N)=0. This is solved for the scalar distance t along the line as
t = DOT(P1-Q1,N)/DOT(E,N);
and the location as
r = Q1+(t*E);
NOTE: DOT() returns the dot product of two vectors, CROSS() the cross product, and UNIT() the unit vector (with magnitude = 1).
DOT(P,Q) = P[0]*Q[0]+P[1]*Q[1]+P[2]*Q[2];
CROSS(P,Q) = { P[1]*Q[2]-P[2]*Q[1], P[2]*Q[0]-P[0]*Q[2], P[0]*Q[1]-P[1]*Q[0] };
UNIT(P) = {P[0]/sqrt(DOT(P,P)), P[1]/sqrt(DOT(P,P)), P[2]/sqrt(DOT(P,P))};
t*P = { t*P[0], t*P[1], t*P[2] };
MAG(P) = sqrt(P[0]*P[0]+P[1]*P[1]+P[2]*P[2]);
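A hedged C++ sketch of the point-and-normal-to-three-points recipe above (Vec3d mirrors the question's type; the helper names and the function name are mine, not a tested drop-in):

#include <cmath>
#include <vector>

struct Vec3d { double x, y, z; };

static Vec3d cross(Vec3d a, Vec3d b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static double mag(Vec3d a)          { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }
static Vec3d  unit(Vec3d a)         { double m = mag(a); return { a.x/m, a.y/m, a.z/m }; }
static Vec3d  add(Vec3d a, Vec3d b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// position = P1 (a point on the plane), normal = N.
std::vector<Vec3d> threeNonColinearPoints(Vec3d position, Vec3d normal) {
    const double TINY = 1e-9;
    Vec3d G = { 0.0, 0.0, 1.0 };                 // any direction not parallel to N
    if (mag(cross(G, normal)) < TINY) G = { 0.0, 1.0, 0.0 };
    if (mag(cross(G, normal)) < TINY) G = { 1.0, 0.0, 0.0 };
    Vec3d U = unit(cross(normal, G));            // first in-plane direction
    Vec3d V = cross(U, normal);                  // second in-plane direction
    return { position, add(position, U), add(position, V) };
}

The three returned points can then feed the line-plane intersection formula above, or you can use P1 and N with it directly.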
One approach you may find easy to implement is to see where the plane intersects the coordinate axes. For the plane given by the equation aX + bY + cZ - d = 0, hold two variables at 0 and solve for the third. So the solutions would be (assuming a, b, c, and d are all non-zero):
(d/a, 0, 0)
(0, d/b, 0)
(0, 0, d/c)
You will need to consider the cases where one or more of the coefficients are 0 so you don't get degenerate or colinear solutions. As an example, if exactly one of the coefficients is 0 (say a=0) you instead use
(1, d/b, 0)
(0, d/b, 0)
(0, 0, d/c)
If exactly two of the coefficients are 0 (say a=0 and b=0) you can use:
(1, 0, d/c)
(0, 1, d/c)
(0, 0, d/c)
If d=0, the plane intersects the three axes at the origin, and so you can use:
(1, 0, -a/c)
(0, -c/b, 1)
(-b/a, 1, 0)
You will need to work out similar cases for d and exactly one other coefficient being 0, as well as d and two others being 0. There should be a total of 16 cases, but there are a few things that come to mind which should make that somewhat more manageable.
Where N=(Nx,Ny,Nz) is the normal, you could project the points N, (Ny,Nz,Nx), (Nz,Nx,Ny) onto the plane: they're guaranteed to be distinct.
Alternatively, if P and Q are on the plane, P+t(Q-P)xN is also on the plane for any t!=0 where x is the cross product.
Alternatively, if M!=N is an arbitrary vector, then K=MxN and L=KxN are parallel to the plane, and any point p on the plane can be written as p=Origin+sK+tL for some s,t.

Using camera rotation to make quad always appear in front of the camera (c++/opengl)

I'm trying to make a quad appear always in front of the camera, I'm trying to start by aligning it with the camera on the x-z plane and making sure it always faces the camera. I used this code...
float ry = cameraRY+PI_2;
float dis = 12;
float sz = 4;
float x = cameraX-dis*cosf(ry);
float y = cameraY;
float z = cameraZ-dis*sinf(ry)+cosf(ry)*sz;
float x2 = x + sinf(ry)*sz;
float y2 = y + sz;
float z2 = z - cosf(ry)*sz;
glVertex3f(x,y,z);
glVertex3f(x2,y,z2);
glVertex3f(x2,y2,z2);
glVertex3f(x,y2,z);
But it didn't quite look right; it seemed that the quad was rotating around an invisible point that was itself rotating correctly around the camera. I don't really know how to change it or how to go about doing this, any help appreciated!
Edit: Forgot to mention,
cameraX,cameraY,cameraZ are the camera's x,y,z positions
cameraRX and cameraRY are the camera's x and y rotations (Z rotation is always zero)
Check out this old tutorial on Lighthouse3D. It describes several "billboarding" techniques, which I believe are what you want.
Let P be your model view projection matrix, and c be the center of the quad you are trying to draw. You want to find a pair of vectors u, v that determine the edges of your quad,
Q = [ c-u-v, c-u+v, c+u+v, c+u-v ]
Such that u is pointing directly down in clip coordinates, while v is pointing to the right:
P(u) = (0, s, 0, 0)
P(v) = (s, 0, 0, 0)
Where s is the desired scale of your quad. Suppose that P is written in block diagonal form,
    [   M   | t ]
P = [-------+---]
    [ 0 0 1 | 0 ]
Then let m0, m1 be the first two rows of M. Now consider the equation we got for P(u), substituting and simplifying, we get:
              [ 0 ]
P(u) ~> M u = [ s ]
              [ 0 ]
Which leads to the following solution for u, v:
u = s * m1 / |m1|^2
v = s * m0 / |m0|^2
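For illustration, a sketch of pulling u and v out of a 4x4 model-view-projection matrix stored column-major as OpenGL does (the Vec3 type, function name and indexing convention are assumptions):

struct Vec3 { float x, y, z; };

static Vec3  scale(Vec3 a, float s) { return { a.x*s, a.y*s, a.z*s }; }
static float lengthSq(Vec3 a)       { return a.x*a.x + a.y*a.y + a.z*a.z; }

// P is the 4x4 MVP matrix in column-major order (P[col*4 + row]).
// m0 and m1 are the first two rows of its upper-left 3x3 block M.
void billboardAxes(const float P[16], float s, Vec3& u, Vec3& v) {
    Vec3 m0 = { P[0], P[4], P[8] };    // row 0 of M
    Vec3 m1 = { P[1], P[5], P[9] };    // row 1 of M
    u = scale(m1, s / lengthSq(m1));   // u = s * m1 / |m1|^2
    v = scale(m0, s / lengthSq(m0));   // v = s * m0 / |m0|^2
}
// Quad corners are then c-u-v, c-u+v, c+u+v, c+u-v.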

Perspective correct texture mapping; z distance calculation might be wrong

I'm making a software rasterizer, and I've run into a bit of a snag: I can't seem to get perspective-correct texture mapping to work.
My algorithm is to first sort the coordinates to plot by y. This returns a highest, lowest and center point. I then walk across the scanlines using the deltas:
// ordering by y is put here
order[0] = &a_Triangle.p[v_order[0]];
order[1] = &a_Triangle.p[v_order[1]];
order[2] = &a_Triangle.p[v_order[2]];
float height1, height2, height3;
height1 = (float)((int)(order[2]->y + 1) - (int)(order[0]->y));
height2 = (float)((int)(order[1]->y + 1) - (int)(order[0]->y));
height3 = (float)((int)(order[2]->y + 1) - (int)(order[1]->y));
// x
float x_start, x_end;
float x[3];
float x_delta[3];
x_delta[0] = (order[2]->x - order[0]->x) / height1;
x_delta[1] = (order[1]->x - order[0]->x) / height2;
x_delta[2] = (order[2]->x - order[1]->x) / height3;
x[0] = order[0]->x;
x[1] = order[0]->x;
x[2] = order[1]->x;
And then we render from order[0]->y to order[2]->y, increasing x_start and x_end by a delta each scanline. When rendering the top part, the deltas are x_delta[0] and x_delta[1]. When rendering the bottom part, the deltas are x_delta[0] and x_delta[2]. Then we linearly interpolate between x_start and x_end on our scanline. UV coordinates are interpolated in the same way, ordered by y, starting at begin and end, to which deltas are applied each step.
This works fine except when I try to do perspective correct UV mapping. The basic algorithm is to take UV/z and 1/z for each vertex and interpolate between them. For each pixel, the UV coordinate becomes UV_current * z_current. However, this is the result:
The inverted part shows where the deltas are flipped. As you can see, the two triangles both seem to be heading towards different points on the horizon.
Here's what I use to calculate the Z at a point in space:
float GetZToPoint(Vec3 a_Point)
{
Vec3 projected = m_Rotation * (a_Point - m_Position);
// #define FOV_ANGLE 60.f
// static const float FOCAL_LENGTH = 1 / tanf(_RadToDeg(FOV_ANGLE) / 2);
// static const float DEPTH = HALFHEIGHT * FOCAL_LENGTH;
float zcamera = DEPTH / projected.z;
return zcamera;
}
Am I right, is it a z buffer issue?
ZBuffer has nothing to do with it.
The Z-buffer is only useful when triangles overlap and you want to make sure that they are drawn correctly (e.g. correctly ordered in Z). The Z-buffer will, for every pixel of the triangle, determine whether a previously placed pixel is nearer to the camera, and if so, not draw the pixel of your triangle.
Since you are drawing 2 triangles which don't overlap, this cannot be the issue.
I made a software rasterizer in fixed point once (for a mobile phone), but I don't have the sources on my laptop. So let me check tonight how I did it. In essence what you've got is not bad! A thing like this could be caused by a very small error.
General tips in debugging this is to have a few test triangles (slope left-side, slope right-side, 90 degree angles, etc etc) and step through it with the debugger and see how your logic deals with the cases.
EDIT:
Pseudocode of my rasterizer (only U, V and Z are taken into account; if you also want to do Gouraud shading you have to do everything for R, G and B similar to what you are doing for U, V and Z):
The idea is that a triangle can be broken down in 2 parts. The top part and the bottom part. The top is from y[0] to y[1] and the bottom part is from y[1] to y[2]. For both sets you need to calculate the step variables with which you are interpolating. The below example shows you how to do the top part. If needed I can supply the bottom part too.
Please note that I do already calculate the needed interpolation offsets for the bottom part in the below 'pseudocode' fragment
first order the coords(x,y,z,u,v) in the order so that coord[0].y < coord[1].y < coord[2].y
next check if any 2 sets of coordinates are identical (only check x and y). If so don't draw
exception: does the triangle have a flat top? if so, the first slope will be infinite
exception2: does the triangle have a flat bottom (yes triangles can have these too ;^) ) then the last slope too will be infinite
calculate 2 slopes (left side and right side)
leftDeltaX = (x[1] - x[0]) / (y[1]-y[0]) and rightDeltaX = (x[2] - x[0]) / (y[2]-y[0])
the second part of the triangle is calculated dependent on: if the left side of the triangle is now really on the leftside (or needs swapping)
code fragment:
if (leftDeltaX < rightDeltaX)
{
    leftDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    rightDeltaX2 = rightDeltaX
    leftDeltaU = (u[1]-u[0]) / (y[1]-y[0])   //for texture mapping
    leftDeltaU2 = (u[2]-u[1]) / (y[2]-y[1])
    leftDeltaV = (v[1]-v[0]) / (y[1]-y[0])   //for texture mapping
    leftDeltaV2 = (v[2]-v[1]) / (y[2]-y[1])
    leftDeltaZ = (z[1]-z[0]) / (y[1]-y[0])   //for texture mapping
    leftDeltaZ2 = (z[2]-z[1]) / (y[2]-y[1])
}
else
{
    swap(leftDeltaX, rightDeltaX);
    leftDeltaX2 = leftDeltaX;
    rightDeltaX2 = (x[2]-x[1]) / (y[2]-y[1])
    leftDeltaU = (u[2]-u[0]) / (y[2]-y[0])   //for texture mapping
    leftDeltaU2 = leftDeltaU
    leftDeltaV = (v[2]-v[0]) / (y[2]-y[0])   //for texture mapping
    leftDeltaV2 = leftDeltaV
    leftDeltaZ = (z[2]-z[0]) / (y[2]-y[0])   //for texture mapping
    leftDeltaZ2 = leftDeltaZ
}
set the currentLeftX and currentRightX both on x[0]
set currentLeftU on leftDeltaU, currentLeftV on leftDeltaV and currentLeftZ on leftDeltaZ
calc start and endpoint for first Y range: startY = ceil(y[0]); endY = ceil(y[1])
prestep x, u, v and z for the fractional part of y for subpixel accuracy (I guess this is also needed for floats; for my fixed-point algorithms this was needed to make the lines and textures give the illusion of moving in much finer steps than the resolution of the display)
calculate where x should be at y[1]: halfwayX = (x[2]-x[0]) * (y[1]-y[0]) / (y[2]-y[0]) + x[0]
and same for U and V and z: halfwayU = (u[2]-u[0]) * (y[1]-y[0]) / (y[2]-y[0]) + u[0]
and using the halfwayX calculate the stepper for the U and V and z:
if(halfwayX - x[1] == 0){ slopeU=0, slopeV=0, slopeZ=0 } else { slopeU = (halfwayU - U[1]) / (halfwayX - x[1])} //(and same for v and z)
do clipping for the Y top (so calculate where we are going to start to draw in case the top of the triangle is off screen (or off the clipping rectangle))
for (y = startY; y < endY; y++)
{
    is Y past bottom of screen? stop rendering!
    calc startX and endX for the first horizontal line
    leftCurX = ceil(startx); leftCurY = ceil(endy);
    clip the line to be drawn to the left horizontal border of the screen (or clipping region)
    prepare a pointer to the destination buffer (doing it through array indexes every time is too slow)
    unsigned int *buf = destbuf + (y*pitch) + startX; (unsigned int in case you are doing 24-bit or 32-bit rendering)
    also prepare your ZBuffer pointer here (if you are using this)
    for (x = startX; x < endX; x++)
    {
        now for perspective texture mapping (using no bilinear interpolation) you do the following:
        code fragment:
        float tv = startV / startZ;
        float tu = startU / startZ;
        tv %= texturePitch;   //make sure the texture coordinates stay on the texture if they are too wide/high
        tu %= texturePitch;   //I'm assuming square textures here. With fixed point you could have used &=
        unsigned int *textPtr = textureBuf + tu + (tv*texturePitch);   //in case of fixed point one could have shifted the tv. Now we have to multiply every time.
        int destColTm = *(textPtr);   //this is the color (if we only use texture mapping) we'll be needing for the pixel
        optional: check the zbuffer whether the previously plotted pixel at this coordinate is nearer or farther than ours.
        plot the pixel
        startZ += slopeZ; startU += slopeU; startV += slopeV;   //update all interpolators
    } end of x loop
    leftCurX += leftDeltaX; rightCurX += rightDeltaX; leftCurU += leftDeltaU; leftCurV += leftDeltaV; leftCurZ += leftDeltaZ;   //update Y interpolators
} end of y loop
//this is the end of the first part. We now have drawn half the triangle. from the top, to the middle Y coordinate.
// we now basically do the exact same thing but now for the bottom half of the triangle (using the other set of interpolators)
let me know if this helps you solve the problem you are facing!
I don't know that I can help with your question, but one of the best books on software rendering that I had read at the time is available online Graphics Programming Black Book by Michael Abrash.
If you are interpolating 1/z, you need to multiply UV/z by z, not 1/z. Assuming you have this:
UV = UV_current * z_current
and z_current is interpolating 1/z, you should change it to:
UV = UV_current / z_current
And then you might want to rename z_current to something like one_over_z_current.
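Put differently, here is a minimal sketch of a perspective-correct span loop under that fix: u/z, v/z and 1/z are the linearly interpolated quantities, and u, v are recovered per pixel by dividing (all names are placeholders, not the asker's code):

// At the two ends of a scanline you have (u/z, v/z, 1/z); interpolate those
// linearly across the span, then recover u and v per pixel.
struct Attr { float u_over_z, v_over_z, one_over_z; };

void drawSpan(int xStart, int xEnd, Attr left, Attr right) {
    if (xEnd <= xStart) return;
    float invW = 1.0f / (float)(xEnd - xStart);
    for (int x = xStart; x < xEnd; ++x) {
        float t   = (float)(x - xStart) * invW;
        float uoz = left.u_over_z   + t * (right.u_over_z   - left.u_over_z);
        float voz = left.v_over_z   + t * (right.v_over_z   - left.v_over_z);
        float ooz = left.one_over_z + t * (right.one_over_z - left.one_over_z);
        float u = uoz / ooz;   // dividing by 1/z is the "multiply by z" step
        float v = voz / ooz;
        // sample the texture at (u, v) and plot the pixel here (omitted).
        (void)u; (void)v;
    }
}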