Ray-triangle intersection algorithm not intersecting (C++)

I've been trying to implement the Moller-Trumbore ray-triangle intersection algorithm in my raytracing code. The code is supposed to read in a mesh and light sources, fire off rays from the light source, and return the triangle from the mesh which each ray intersects. Here is my implementation of the algorithm:
//Moller-Trumbore intersection algorithm
void getFaceIntersect(modelStruct m, ray r, hitFaceStruct& hitFaces)
{
    // Constant throughout loop
    point origin = r.p0;
    point direction = r.u;
    hitFaces.isHit = false;

    for (int i = 0; i < m.faces; i++)
    {
        // Get face vertices
        point v1 = m.vertList[m.faceList[i].v1];
        point v2 = m.vertList[m.faceList[i].v2];
        point v3 = m.vertList[m.faceList[i].v3];

        // Get two edges
        point edge1 = v2 - v1;
        point edge2 = v3 - v1;

        // Get p
        point p = direction.cross(direction, edge2);

        // Use p to find determinant
        double det = p.dot(edge1, p);

        // If the determinant is about 0, the ray lies in the plane of the triangle
        if (abs(det) < 0.00000000001)
        {
            continue;
        }

        double inverseDet = 1 / det;
        point v1ToOrigin = (origin - v1);
        double u = v1ToOrigin.dot(v1ToOrigin, p) * inverseDet;

        // If u is not between 0 and 1, no hit
        if (u < 0 || u > 1)
        {
            continue;
        }

        // Used for calculating v
        point q = v1ToOrigin.cross(v1ToOrigin, edge1);
        double v = direction.dot(direction, q) * inverseDet;

        if (v < 0 || (u + v) > 1)
        {
            continue;
        }

        double t = q.dot(edge2, q) * inverseDet;

        // gets closest face
        if (t < abs(hitFaces.s)) {
            hitFaceStruct goodStruct = hitFaceStruct();
            goodStruct.face = i;
            goodStruct.hitPoint = p;
            goodStruct.isHit = true;
            goodStruct.s = t;
            hitFaces = goodStruct;
            break;
        }
    }
}
The relevant code for hitFaceStruct and modelStruct is as follows:
typedef struct _hitFaceStruct
{
    int face;        // the index of the face in question in the list of faces
    float s;         // the distance along the ray to the hit
    bool isHit;
    point hitPoint;
} hitFaceStruct;

typedef struct _modelStruct {
    char *fileName;
    float scale;
    float rot_x, rot_y, rot_z;
    float x, y, z;
    float r_amb, g_amb, b_amb;
    float r_dif, g_dif, b_dif;
    float r_spec, g_spec, b_spec;
    float k_amb, k_dif, k_spec, k_reflective, k_refractive;
    float spec_exp, index_refraction;

    int verts, faces, norms = 0;   // Number of vertices, faces, and normals in the model
    point *vertList, *normList;    // Vertex and Normal Lists
    faceStruct *faceList;          // Face List
} modelStruct;
Whenever I shoot a ray, the values of u or v in the algorithm code always come out to a large negative number, rather than the expected small, positive one. The direction vector of the ray is normalized before I pass it on to the intersection code, and I'm positive I'm firing rays that would normally hit the mesh. Can anyone please help me spot my error here?
Thanks!
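For reference, here is a minimal, self-contained Möller–Trumbore sketch, independent of the asker's point/ray API (Vec3, dot, cross and intersectTriangle are illustrative names, not the code above). It shows the expected operand order for each dot and cross product and the final distance test:

#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static Vec3 operator-(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Hit { double t, u, v; };

// Möller–Trumbore: returns the hit distance t and barycentric (u, v),
// or nothing if the ray misses the triangle (v1, v2, v3).
std::optional<Hit> intersectTriangle(const Vec3 &origin, const Vec3 &dir,
                                     const Vec3 &v1, const Vec3 &v2, const Vec3 &v3)
{
    const double eps = 1e-9;
    Vec3 edge1 = v2 - v1;
    Vec3 edge2 = v3 - v1;
    Vec3 p = cross(dir, edge2);                      // P = D x E2
    double det = dot(edge1, p);                      // det = E1 . P
    if (std::fabs(det) < eps) return std::nullopt;   // ray parallel to triangle plane
    double invDet = 1.0 / det;

    Vec3 tvec = origin - v1;                         // T = O - V1
    double u = dot(tvec, p) * invDet;                // u = (T . P) / det
    if (u < 0.0 || u > 1.0) return std::nullopt;

    Vec3 q = cross(tvec, edge1);                     // Q = T x E1
    double v = dot(dir, q) * invDet;                 // v = (D . Q) / det
    if (v < 0.0 || u + v > 1.0) return std::nullopt;

    double t = dot(edge2, q) * invDet;               // t = (E2 . Q) / det
    if (t < eps) return std::nullopt;                // intersection behind the origin
    return Hit{t, u, v};
}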


C++ - Deal with floating point errors in geometric interpolation

Problem
I am writing a ray tracer as a use case for a specific machine learning approach in Computer Graphics.
My problem is that, when I try to find the intersection between a ray and a surface, the result is not exact.
Basically, if I am scattering a ray from point O towards a surface located at (x,y,z), where z = 81, I would expect the solution to be something like S = (x,y,81). The problem is: I get a solution like (x,y,81.000000005).
This is of course a problem, because subsequent operations depend on that solution, and they need it to be exact.
Question
My question is: how do people in Computer Graphics deal with this problem? I tried to change my variables from float to double and it does not solve the problem.
Alternative solutions
I tried to use the function std::round(). This can only help in specific situations, not when the exact solution has significant digits after the decimal point.
Same for std::ceil() and std::floor().
EDIT
This is how I calculate the intersection with a surface (a rectangle) parallel to the xz plane.
First of all, I calculate the distance t between the origin of my Ray and the surface. If my Ray, in that specific direction, does not hit the surface, t is returned as 0.
class Rectangle_xy: public Hitable {
public:
    float x1, x2, y1, y2, z;
    ...

    float intersect(const Ray &r) const { // returns distance, 0 if no hit
        float t = (y - r.o.y) / r.d.y;    // ray.y = t * dir.y
        const float& x = r.o.x + r.d.x * t;
        const float& z = r.o.z + r.d.z * t;
        if (x < x1 || x > x2 || z < z1 || z > z2 || t < 0) {
            t = 0;
            return 0;
        } else {
            return t;
        }
        ...
    }
Specifically, given a Ray and the id of an object in the list (that I want to hit):
inline Vec hittingPoint(const Ray &r, int &id) {
    float t; // distance to intersection
    if (!intersect(r, t, id))
        return Vec();
    const Vec& x = r.o + r.d * t; // ray intersection point (t calculated in intersect())
    return x;
}
The function intersect() in the previous snippet of code checks, for every Rectangle in the list rect, whether the ray intersects some object:
inline bool intersect(const Ray &r, float &t, int &id) {
    const float& n = NUMBER_OBJ; // number of objects in the scene
    float d;
    float inf = t = 1e20;
    for (int i = 0; i < n; i++) {
        if ((d = rect[i]->intersect(r)) && d < t) { // distance of hit point
            t = d;
            id = i;
        }
    }
    // Return the closest intersection, as a bool
    return t < inf;
}
The coordinate is then obtained using the geometric interpolation between a line and a surface in the 3D space:
Vec& x = r.o + r.d * t;
where:
r.o: the ray origin, defined as Vec(float a, float b, float c).
r.d: the direction of the ray, again a Vec(float d, float e, float f).
t: float representing the distance between the object and the origin.
You could look into using std::numeric_limits<T>::epsilon() for your float/double comparison, and check whether your result lies within ±epsilon of the expected value.
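For example, a minimal sketch of such a tolerance check (nearlyEqual is a hypothetical helper; the tolerance is scaled by the magnitude of the values, so it also works for coordinates around 81):

#include <algorithm>
#include <cmath>
#include <limits>

// Returns true if a and b agree to within a few machine epsilons,
// scaled by the magnitude of the values being compared.
template <typename T>
bool nearlyEqual(T a, T b, int ulps = 4)
{
    const T eps = std::numeric_limits<T>::epsilon();
    const T scale = std::max({T(1), std::fabs(a), std::fabs(b)});
    return std::fabs(a - b) <= eps * scale * static_cast<T>(ulps);
}

// Example: treat a float hit coordinate of 81.000005f as lying on the z = 81 plane.
// bool onSurface = nearlyEqual(hitPoint.z, 81.0f);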
An alternative would be not to ray trace towards a point; instead, place a relatively small box or sphere there.

sorting points: concave polygon

I have a set of points that I'm trying to sort in CCW or CW order by their angle. I want the points sorted so that they form a polygon with no self-intersections and no split regions. This is difficult because, in most cases, the polygon would be concave.
point centroid;

int main( int argc, char** argv )
{
    // I read a set of points into a struct point array: points[n]

    // Find centroid
    double sx = 0; double sy = 0;
    for (int i = 0; i < n; i++)
    {
        sx += points[i].x;
        sy += points[i].y;
    }
    centroid.x = sx/n;
    centroid.y = sy/n;

    // sort points in polar order using the centroid as reference
    std::qsort(points, n, sizeof(point), polarOrder);
}
// -1 ccw, 1 cw, 0 collinear
int orientation(point a, point b, point c)
{
    double area2 = (b.x-a.x)*(c.y-a.y) - (b.y-a.y)*(c.x-a.x);
    if (area2 < 0) return -1;
    else if (area2 > 0) return +1;
    else return 0;
}

// compare other points relative to polar angle they make with this point
// (where the polar angle is between 0 and 2pi)
int polarOrder(const void *vp1, const void *vp2)
{
    point *p1 = (point *)vp1;
    point *p2 = (point *)vp2;

    // translation
    double dx1 = p1->x - centroid.x;
    double dy1 = p1->y - centroid.y;
    double dx2 = p2->x - centroid.x;
    double dy2 = p2->y - centroid.y;

    if (dy1 >= 0 && dy2 < 0)      { return -1; }  // p1 above and p2 below
    else if (dy2 >= 0 && dy1 < 0) { return  1; }  // p1 below and p2 above
    else if (dy1 == 0 && dy2 == 0) {              // 3-collinear and horizontal
        if (dx1 >= 0 && dx2 < 0)      { return -1; }
        else if (dx2 >= 0 && dx1 < 0) { return  1; }
        else                          { return  0; }
    }
    else return -orientation(centroid, *p1, *p2); // both above or below
}
It looks like the points are sorted accurately (pink) until they "cave in", at which point the algorithm skips over these points and then continues. Can anyone point me in the right direction to sort the points so that they form the polygon I'm looking for?
Raw Point Plot - Blue, Pink Points - Sorted
Point List: http://pastebin.com/N0Wdn2sm (You can ignore the 3rd component, since all these points lie on the same plane.)
The code below (sorry it's C rather than C++) sorts correctly as you wish with atan2.
The problem with your code may be that it attempts to use the included angle between the two vectors being compared. This is doomed to fail. The array is not circular. It has a first and a final element. With respect to the centroid, sorting an array requires a total polar order: a range of angles such that each point corresponds to a unique angle regardless of the other point. The angles are the total polar order, and comparing them as scalars provides the sort comparison function.
In this manner, the algorithm you proposed is guaranteed to produce a star-shaped polyline. It may oscillate wildly between different radii (...which your data do! Is this what you meant by "caved in"? If so, it's a feature of your algorithm and data, not an implementation error), and points corresponding to exactly the same angle might produce edges that coincide (lie directly on top of each other), but the edges won't cross.
I believe that your choice of centroid as the polar origin is sufficient to guarantee that connecting the ends of the polyline generated as above will produce a full star-shaped polygon, however, I don't have a proof.
Result plotted with Excel
Note you can guess from the nearly radial edges where the centroid is! This is the "star shape" I referred to above.
To illustrate this is really a star-shaped polygon, here is a zoom in to the confusing lower left corner:
If you want a polygon that is "nicer" in some sense, you will need a fancier (probably much fancier) algorithm, e.g. the Delaunay triangulation-based ones others have referred to.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

struct point {
    double x, y;
};

void print(FILE *f, struct point *p) {
    fprintf(f, "%f,%f\n", p->x, p->y);
}

// Return polar angle of p with respect to origin o
double to_angle(const struct point *p, const struct point *o) {
    return atan2(p->y - o->y, p->x - o->x);
}

void find_centroid(struct point *c, struct point *pts, int n_pts) {
    double x = 0, y = 0;
    for (int i = 0; i < n_pts; i++) {
        x += pts[i].x;
        y += pts[i].y;
    }
    c->x = x / n_pts;
    c->y = y / n_pts;
}

static struct point centroid[1];

int by_polar_angle(const void *va, const void *vb) {
    double theta_a = to_angle(va, centroid);
    double theta_b = to_angle(vb, centroid);
    return theta_a < theta_b ? -1 : theta_a > theta_b ? 1 : 0;
}

void sort_by_polar_angle(struct point *pts, int n_pts) {
    find_centroid(centroid, pts, n_pts);
    qsort(pts, n_pts, sizeof pts[0], by_polar_angle);
}

int main(void) {
    FILE *f = fopen("data.txt", "r");
    if (!f) return 1;
    struct point pts[10000];
    int n_pts, n_read;
    for (n_pts = 0;
         (n_read = fscanf(f, "%lf%lf%*f", &pts[n_pts].x, &pts[n_pts].y)) != EOF;
         ++n_pts)
        if (n_read != 2) return 2;
    fclose(f);
    sort_by_polar_angle(pts, n_pts);
    for (int i = 0; i < n_pts; i++)
        print(stdout, pts + i);
    return 0;
}
Well, first and foremost, I see centroid declared as a local variable in main. Yet inside polarOrder you are also accessing some centroid variable.
Judging by the code you posted, that second centroid is a file-scope variable that you never initialized to any specific value. Hence the meaningless results from your comparison function.
The second strange detail in your code is that you do return -orientation(centroid,*p1,*p2) if both points are above or below. Since orientation returns -1 for CCW and +1 for CW, it should be just return orientation(centroid,*p1,*p2). Why did you feel the need to negate the result of orientation?
Your original points don't appear to form a convex polygon, so simply ordering them by angle around a fixed centroid will not necessarily result in a clean polygon. This is a non-trivial problem; you may want to research Delaunay triangulation and/or gift-wrapping algorithms, although both would have to be modified because your polygon is concave. The answer here is an interesting example of a modified gift-wrapping algorithm for concave polygons. There is also a C++ library called PCL that may do what you need.
But... if you really do want to do a polar sort, your sorting functions seem more complex than necessary. I would sort using atan2 first, then optimize later if necessary, once you get the result you want. Here is an example using lambda functions:
#include <algorithm>
#include <math.h>
#include <vector>

int main()
{
    struct point
    {
        double x;
        double y;
    };

    std::vector< point > points;
    point centroid;

    // fill in your data...

    auto sort_predicate = [&centroid] (const point& a, const point& b) -> bool {
        return atan2 (a.x - centroid.x, a.y - centroid.y) <
               atan2 (b.x - centroid.x, b.y - centroid.y);
    };
    std::sort (points.begin(), points.end(), sort_predicate);
}

Where is my kd tree traversal code wrong?

I was optimizing my C++ raytracer, which traces single rays through kd-trees. So far I have been using Havran's recursive algorithm 'B', which seems antique and overblown for OOP. My new code is as short as possible (and hopefully both more easily optimized by the compiler and more easily maintained):
struct StackElement {
    KDTreeNode<PT>* node;
    float tmax;
    array<float, 3> origin;
};

// initializing explicit stack
stack<StackElement> mystack;

// initialize local variables
KDTreeNode<PT>* node = tree.root;
array<float, 3> origin {ray.origin[0], ray.origin[1], ray.origin[2]};
const array<float, 3> direction {ray.direction[0], ray.direction[1], ray.direction[2]};
const array<float, 3> invDirection {1.0f / ray.direction[0], 1.0f / ray.direction[1], 1.0f / ray.direction[2]};
float tmax = numeric_limits<float>::max();
float tClosestIntersection = numeric_limits<float>::max();
bool notFullyTraversed = true;

while (notFullyTraversed) {
    if (node->isLeaf()) {
        // test all primitives inside the leaf
        for (auto p : node->primitives()) {
            p->intersect(ray, tClosestIntersection, intersection, tmax);
        }

        // if leaf + empty stack => return
        if (mystack.empty()) {
            notFullyTraversed = false;
        } else {
            // pop from stack
            origin = mystack.top().origin;
            tmax = mystack.top().tmax;
            node = mystack.top().node;
            mystack.pop();
        }
    } else {
        // get axis of node and its split plane
        const int axis = node->axis();
        const float plane = node->splitposition();

        // test if ray is not parallel to plane
        if ((fabs(direction[axis]) > EPSILON)) {
            const float t = (plane - origin[axis]) * invDirection[axis];

            // case of the ray intersecting the plane, then test both childs
            if (0.0f < t && t < tmax) {
                // traverse near first, then far. Set tmax = t for near
                tmax = t;
                // push only far child onto stack
                mystack.push({
                    (origin[axis] > plane) ? node->leftChild() : node->rightChild(),
                    tmax - t,
                    {origin[0] + direction[0] * t, origin[1] + direction[1] * t, origin[2] + direction[2] * t}
                });
            }
        }

        // in every case: traverse near child first
        node = (origin[axis] > plane) ? node->rightChild() : node->leftChild();
    }
}
return intersection.found;
It's not traversing the far child often enough. Which case am I missing?
One problem turned out to be small. The original, wrong code:
//traverse near first, then far. Set tmax = t for near
tmax = t;
//push only far child onto stack
mystack.push({ ... , tmax - t, ... });
it always pushes 0.0f onto the stack as the exit distance for the far node, meaning no positive t is accepted for intersections.
Swapping both lines fixes that problem.
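For clarity, a sketch of the swapped version, using the same variable names as the traversal snippet above: the far child's exit distance is computed from the old tmax before tmax is shrunk for the near child.

// Push the far child with its exit distance computed from the *old* tmax...
mystack.push({
    (origin[axis] > plane) ? node->leftChild() : node->rightChild(),
    tmax - t,   // remaining extent beyond the split plane
    {origin[0] + direction[0] * t,
     origin[1] + direction[1] * t,
     origin[2] + direction[2] * t}
});
// ...and only then restrict the near child's extent.
tmax = t;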
My recursive traversal still makes different decisions (Havran's version takes about 25% more iterations), yet the output image has 99.5% identical pixels. That is within floating-point rounding, but it still does not answer the question: which case is not recognized by this simplified implementation, or which operation is not numerically stable enough in this version?

Clip line to screen coordinates

I have a line that is defined by two points.
start = (xs,ys)
end = (xe, ye)
The drawing function that I'm using only accepts lines that are fully inside the screen.
Screen size is (xSize, ySize).
Top left corner is (0,0). Bottom right corner is (xSize, ySize).
Other functions give me lines defined, for example, as start(-50, -15), end(5000, 200), so their endpoints are outside the screen.
In C++
struct Vec2
{
    int x, y;
};
Vec2 start, end;  // This is all a little bit pseudo code
Vec2 screenSize;  // You can access coordinates like start.x, end.y
How can I calculate a new start and end that lie on the screen edge rather than outside the screen?
I know how to do it on paper, but I can't transfer it to C++.
On paper I search for the point that belongs to both the edge and the line, but that is too much calculation to translate directly into C++.
Can you help?
There are many line clipping algorithms like:
Cohen–Sutherland wikipedia page with implementation
Liang–Barsky wikipedia page
Nicholl–Lee–Nicholl (NLN)
and many more. see Line Clipping on wikipedia
[EDIT1]
See below figure:
there are 3 kinds of start point:
sx > 0 and sy < 0 (red line)
sx < 0 and sy > 0 (yellow line)
sx < 0 and sy < 0 (green and violet lines)
In situations 1 and 2, simply find Xintersect or Yintersect respectively and choose it as the new start point.
As you can see, there are 2 kinds of lines in situation 3. In this situation, find both Xintersect and Yintersect and choose the intersection point nearer to the end point, i.e. the one with the minimum distance to endPoint:
min(distance(Xintersect, endPoint), distance(Yintersect, endPoint))
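A rough sketch of that "nearer intercept" choice for situation 3 (Vec2f, dist2 and clipStart are illustrative names, not from the question; the Liang-Barsky routine below handles all cases more generally):

#include <cmath>

struct Vec2f { float x, y; };

static float dist2(Vec2f a, Vec2f b) {
    float dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Situation 3: the start point lies outside in both x and y (e.g. sx < 0, sy < 0)
// while the end point is on-screen. Intersect the line with the x = 0 and y = 0
// edges and keep the candidate closer to the end point. Assumes dx and dy are
// non-zero, which holds when the start is outside in both directions and the
// end is inside.
Vec2f clipStart(Vec2f s, Vec2f e)
{
    float dx = e.x - s.x, dy = e.y - s.y;
    float tLeft = (0.0f - s.x) / dx;           // parameter where the line crosses x = 0
    float tTop  = (0.0f - s.y) / dy;           // parameter where the line crosses y = 0
    Vec2f leftHit { 0.0f, s.y + tLeft * dy };  // candidate on the x = 0 edge
    Vec2f topHit  { s.x + tTop * dx, 0.0f };   // candidate on the y = 0 edge
    return dist2(leftHit, e) < dist2(topHit, e) ? leftHit : topHit;
}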
[EDIT2]
// Liang-Barsky function by Daniel White # http://www.skytopia.com/project/articles/compsci/clipping.html
// This function inputs 8 numbers, and outputs 4 new numbers (plus a boolean value to say whether the clipped line is drawn at all).
//
bool LiangBarsky (double edgeLeft, double edgeRight, double edgeBottom, double edgeTop,  // Define the x/y clipping values for the border.
                  double x0src, double y0src, double x1src, double y1src,                // Define the start and end points of the line.
                  double &x0clip, double &y0clip, double &x1clip, double &y1clip)        // The output values, so declare these outside.
{
    double t0 = 0.0;    double t1 = 1.0;
    double xdelta = x1src - x0src;
    double ydelta = y1src - y0src;
    double p, q, r;

    for (int edge = 0; edge < 4; edge++) {   // Traverse through left, right, bottom, top edges.
        if (edge == 0) { p = -xdelta;  q = -(edgeLeft - x0src);   }
        if (edge == 1) { p =  xdelta;  q =  (edgeRight - x0src);  }
        if (edge == 2) { p = -ydelta;  q = -(edgeBottom - y0src); }
        if (edge == 3) { p =  ydelta;  q =  (edgeTop - y0src);    }
        r = q / p;
        if (p == 0 && q < 0) return false;   // Don't draw line at all. (parallel line outside)

        if (p < 0) {
            if (r > t1) return false;        // Don't draw line at all.
            else if (r > t0) t0 = r;         // Line is clipped!
        } else if (p > 0) {
            if (r < t0) return false;        // Don't draw line at all.
            else if (r < t1) t1 = r;         // Line is clipped!
        }
    }

    x0clip = x0src + t0 * xdelta;
    y0clip = y0src + t0 * ydelta;
    x1clip = x0src + t1 * xdelta;
    y1clip = y0src + t1 * ydelta;

    return true;                             // (clipped) line is drawn
}
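A small usage sketch, assuming the LiangBarsky function above is in scope; the 800x600 screen size is arbitrary. With the function's conventions, "edgeBottom" and "edgeTop" are simply the minimum and maximum y values, so for a top-left origin pass 0 and ySize - 1.

#include <cstdio>

int main()
{
    const double xSize = 800, ySize = 600;   // arbitrary screen size
    double x0, y0, x1, y1;
    // Clip the question's example line, start(-50, -15) end(5000, 200).
    if (LiangBarsky(0, xSize - 1, 0, ySize - 1,
                    -50, -15, 5000, 200,
                    x0, y0, x1, y1))
        std::printf("clipped: (%.1f, %.1f) -> (%.1f, %.1f)\n", x0, y0, x1, y1);
    else
        std::printf("line lies completely outside the screen\n");
    return 0;
}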
Here is a function I wrote. It cycles through all 4 planes (left, top, right, bottom) and clips each point by the plane.
// Clips a line segment to an axis-aligned rectangle
// Returns true if clipping is successful
// Returns false if line segment lies outside the rectangle
bool clipLineToRect(int a[2], int b[2],
                    int xmin, int ymin, int xmax, int ymax)
{
    int mins[2] = {xmin, ymin};
    int maxs[2] = {xmax, ymax};
    int normals[2] = {1, -1};

    for (int axis = 0; axis < 2; axis++) {
        for (int plane = 0; plane < 2; plane++) {
            // Check both points
            for (int pt = 1; pt <= 2; pt++) {
                int* pt1 = pt == 1 ? a : b;
                int* pt2 = pt == 1 ? b : a;

                // If both points are outside the same plane, the line is
                // outside the rectangle
                if ( (a[0] < xmin && b[0] < xmin) || (a[0] > xmax && b[0] > xmax) ||
                     (a[1] < ymin && b[1] < ymin) || (a[1] > ymax && b[1] > ymax)) {
                    return false;
                }

                const int n = normals[plane];
                if ( (n == 1 && pt1[axis] < mins[axis]) ||   // check left/top plane
                     (n == -1 && pt1[axis] > maxs[axis]) ) { // check right/bottom plane

                    // Calculate interpolation factor t using ratio of signed distance
                    // of each point from the plane
                    const float p = (n == 1) ? mins[axis] : maxs[axis];
                    const float q1 = pt1[axis];
                    const float q2 = pt2[axis];
                    const float d1 = n * (q1 - p);
                    const float d2 = n * (q2 - p);
                    const float t = d1 / (d1 - d2);

                    // t should always be between 0 and 1
                    if (t < 0 || t > 1) {
                        return false;
                    }

                    // Interpolate to find the new point
                    pt1[0] = (int)(pt1[0] + (pt2[0] - pt1[0]) * t);
                    pt1[1] = (int)(pt1[1] + (pt2[1] - pt1[1]) * t);
                }
            }
        }
    }
    return true;
}
Example Usage:
void testClipLineToRect()
{
    int screenWidth = 320;
    int screenHeight = 240;
    int xmin = 0;
    int ymin = 0;
    int xmax = screenWidth - 1;
    int ymax = screenHeight - 1;

    int a[2] = {-10, 10};
    int b[2] = {300, 250};

    printf("Before clipping:\n\ta={%d, %d}\n\tb=[%d, %d]\n",
           a[0], a[1], b[0], b[1]);

    if (clipLineToRect(a, b, xmin, ymin, xmax, ymax)) {
        printf("After clipping:\n\ta={%d, %d}\n\tb=[%d, %d]\n",
               a[0], a[1], b[0], b[1]);
    }
    else {
        printf("clipLineToRect returned false\n");
    }
}
Output:
Before clipping:
a={-10, 10}
b=[300, 250]
After clipping:
a={0, 17}
b=[285, 239]

How do I use texture-mapping in a simple ray tracer?

I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like; that sphere uses one set colour, not a texture map.
Another problem is that when the texture map contains anything other than pixels of a single colour, the sphere turns white. My test image is a picture of water, and when it is mapped, it shows only one ring of bluish pixels surrounding the white colour.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;
        Vector ve = Point(oc.x, oc.y, oc.z + 1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x, oc.y + 1, oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);

        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;
        float u;

        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 / (float)sin(phi));
        float theta = num / (float)(2 * M_PI);
        if (theta < 0 || theta == NAN) { theta = 0; }
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }

        int x = (u * IMAGE_WIDTH) - 1;
        int y = (v * IMAGE_WIDTH) - 1;
        int p = (y * IMAGE_WIDTH + x) * 3;
        return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
};
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);
float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;

Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f / (len * len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how in the solid red case, there doesn't seem to be any gradation around the sphere. The red looks like it's saturated.
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1, and that the p index into your TEXT_DATA array is also within range.
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.
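A sketch of the range checks suggested above, using the variable names from the getColor snippet (TEXT_DATA_SIZE is a hypothetical constant for the number of bytes in the texture array):

#include <cassert>

// Inside getColor, after u, v, x, y and p have been computed and
// before indexing TEXT_DATA:
assert(u >= 0.0f && u <= 1.0f);
assert(v >= 0.0f && v <= 1.0f);
assert(x >= 0 && x < IMAGE_WIDTH);
assert(y >= 0 && y < IMAGE_WIDTH);
// The last byte read is TEXT_DATA[p + 2], so it must stay inside the array.
assert(p >= 0 && p + 2 < TEXT_DATA_SIZE);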