Why do we choose the "bounding box" method to fill a triangle? - c++

I'm following the short course "How OpenGL works: software rendering in 500 lines of code" on GitHub. In lesson 2, the author teaches how to fill a triangle with color. He presents two methods:
Enumerate all the horizontal segments within the triangle, and draw these segments. The author's code is as follows.
void triangle(Vec2i t0, Vec2i t1, Vec2i t2, TGAImage &image, TGAColor color) {
if (t0.y==t1.y && t0.y==t2.y) return; // I dont care about degenerate triangles
// sort the vertices, t0, t1, t2 lower−to−upper (bubblesort yay!)
if (t0.y>t1.y) std::swap(t0, t1);
if (t0.y>t2.y) std::swap(t0, t2);
if (t1.y>t2.y) std::swap(t1, t2);
int total_height = t2.y-t0.y;
for (int i=0; i<total_height; i++) {
bool second_half = i>t1.y-t0.y || t1.y==t0.y;
int segment_height = second_half ? t2.y-t1.y : t1.y-t0.y;
float alpha = (float)i/total_height;
float beta = (float)(i-(second_half ? t1.y-t0.y : 0))/segment_height; // be careful: with above conditions no division by zero here
Vec2i A = t0 + (t2-t0)*alpha;
Vec2i B = second_half ? t1 + (t2-t1)*beta : t0 + (t1-t0)*beta;
if (A.x>B.x) std::swap(A, B);
for (int j=A.x; j<=B.x; j++) {
image.set(j, t0.y+i, color); // attention, due to int casts t0.y+i != A.y
}
}
}
Find the bounding box of the triangle. Enumerate all the points in the bounding box and use barycentric coordinates to check whether each point lies inside the triangle; if it does, fill it with color. The author's code is as follows.
Vec3f barycentric(Vec2i *pts, Vec2i P) {
    Vec3f u = cross(Vec3f(pts[2][0]-pts[0][0], pts[1][0]-pts[0][0], pts[0][0]-P[0]),
                    Vec3f(pts[2][1]-pts[0][1], pts[1][1]-pts[0][1], pts[0][1]-P[1]));
    if (std::abs(u[2])<1) return Vec3f(-1,1,1); // triangle is degenerate, in this case return smth with negative coordinates
    return Vec3f(1.f-(u.x+u.y)/u.z, u.y/u.z, u.x/u.z);
}
void triangle(Vec2i *pts, TGAImage &image, TGAColor color) {
    Vec2i bboxmin(image.get_width()-1, image.get_height()-1);
    Vec2i bboxmax(0, 0);
    Vec2i clamp(image.get_width()-1, image.get_height()-1);
    for (int i=0; i<3; i++) {
        for (int j=0; j<2; j++) {
            bboxmin[j] = std::max(0,        std::min(bboxmin[j], pts[i][j]));
            bboxmax[j] = std::min(clamp[j], std::max(bboxmax[j], pts[i][j]));
        }
    }
    Vec2i P;
    for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
        for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
            Vec3f bc_screen = barycentric(pts, P);
            if (bc_screen.x<0 || bc_screen.y<0 || bc_screen.z<0) continue;
            image.set(P.x, P.y, color);
        }
    }
}
The author chooses the second method at the end of lesson 2, but I can't understand why. Is the reason something to do with efficiency, or is it just that the second method is easier to understand?

Barycentric coordinates are used to interpolate, or "smear", values defined at each vertex of the triangle across the triangle. For example: if I define a triangle ABC, I can give each vertex a color: Red, Green, and Blue respectively. Then, as I fill in the triangle, I can use the barycentric coordinates (alpha, beta, gamma) to form the linear combination P = alpha * Red + beta * Green + gamma * Blue to determine the color at any point inside the triangle.
This process is highly optimized and built into GPU hardware. You can smear any values you'd like, including normal vectors (which is often used in per-pixel lighting computations), so it is a very useful operation.
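For instance, extending the bounding-box rasterizer above to interpolate a per-vertex color could look roughly like the sketch below. This is not the course's code, just an illustration that reuses its barycentric() helper; it also assumes TGAColor exposes its channels as a raw[] byte array, which may differ in your version of the class.
// Hypothetical variant of the author's triangle(): one color per vertex.
void triangle_interp(Vec2i *pts, TGAColor *cols, TGAImage &image) {
    Vec2i bboxmin(image.get_width()-1, image.get_height()-1);
    Vec2i bboxmax(0, 0);
    Vec2i clamp(image.get_width()-1, image.get_height()-1);
    for (int i=0; i<3; i++)
        for (int j=0; j<2; j++) {
            bboxmin[j] = std::max(0,        std::min(bboxmin[j], pts[i][j]));
            bboxmax[j] = std::min(clamp[j], std::max(bboxmax[j], pts[i][j]));
        }
    Vec2i P;
    for (P.x=bboxmin.x; P.x<=bboxmax.x; P.x++) {
        for (P.y=bboxmin.y; P.y<=bboxmax.y; P.y++) {
            Vec3f bc = barycentric(pts, P);
            if (bc.x<0 || bc.y<0 || bc.z<0) continue;   // outside the triangle
            TGAColor c;
            for (int i=0; i<3; i++)                     // blend each channel with weights (alpha, beta, gamma)
                c.raw[i] = bc.x*cols[0].raw[i] + bc.y*cols[1].raw[i] + bc.z*cols[2].raw[i];
            image.set(P.x, P.y, c);
        }
    }
}
The same three weights can blend texture coordinates or normals instead of colors, which is exactly the per-pixel interpolation the GPU performs.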
Of course, I have no idea what your teacher is thinking, but I'd hazard a guess that a future lesson will talk about that, so the second algorithm naturally leads into that discussion.
Source: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates

Related

Improper reflection in recursive ray tracer

I'm implementing a recursive ray tracer with reflection. The ray tracer is currently reflecting areas that are in shadow, and I don't know why. The shadow aspect of the ray tracer works as expected when the reflective code is commented out, so I don't think that's the issue.
Vec Camera::shade(Vec accumulator,
                  Ray ray,
                  vector<Surface*> surfaces,
                  vector<Light*> lights,
                  int recursion_depth) {
    if (recursion_depth == 0) return Vec(0,0,0);

    double closestIntersection = numeric_limits<double>::max();
    Surface* cs;
    for (unsigned int i=0; i < surfaces.size(); i++) {
        Surface* s = surfaces[i];
        double intersection = s->intersection(ray);
        if (intersection > EPSILON && intersection < closestIntersection) {
            closestIntersection = intersection;
            cs = s;
        }
    }
    if (closestIntersection < numeric_limits<double>::max()) {
        Point intersectionPoint = ray.origin + ray.dir*closestIntersection;
        Vec intersectionNormal = cs->calculateIntersectionNormal(intersectionPoint);
        Material materialToUse = cs->material;

        for (unsigned int j=0; j<lights.size(); j++) {
            Light* light = lights[j];
            Vec dirToLight = (light->origin - intersectionPoint).norm();
            Vec dirToCamera = (this->eye - intersectionPoint).norm();
            bool visible = true;
            for (unsigned int k=0; k<surfaces.size(); k++) {
                Surface* s = surfaces[k];
                double t = s->intersection(Ray(intersectionPoint, dirToLight));
                if (t > EPSILON && t < closestIntersection) {
                    visible = false;
                    break;
                }
            }
            if (visible) {
                accumulator = accumulator + this->color(dirToLight, intersectionNormal,
                                                        intersectionPoint, dirToCamera, light, materialToUse);
            }
        }

        // Reflective ray
        // Vec r = d - 2(d · n)n
        if (materialToUse.isReflective()) {
            Vec d = ray.dir;
            Vec r_v = d - intersectionNormal*2*intersectionNormal.dot(d);
            Ray r(intersectionPoint + intersectionNormal*EPSILON, r_v);
            // km is the ideal specular component of the material, and mult is component-wise multiplication
            return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km);
        }
        else
            return accumulator;
    }
    else
        return accumulator;
}
Vec Camera::color(Vec dirToLight,
                  Vec intersectionNormal,
                  Point intersectionPoint,
                  Vec dirToCamera,
                  Light* light,
                  Material material) {
    // kd I max(0, n · l) + ks I max(0, n · h)^p
    Vec I(light->r, light->g, light->b);
    double dist = (intersectionPoint - light->origin).magnitude();
    I = I/(dist*dist);
    Vec h = (dirToLight + dirToCamera)/((dirToLight + dirToCamera).magnitude());
    Vec kd = material.kd;
    Vec ks = material.ks;
    Vec diffuse  = kd*I*fmax(0.0, intersectionNormal.dot(dirToLight));
    Vec specular = ks*I*pow(fmax(0.0, intersectionNormal.dot(h)), material.r);
    return diffuse + specular;
}
I've provided my output and the expected output. The lighting looks a bit different because mine was originally an .exr file and the other is a .png. I've drawn arrows in my output where the surface should be reflecting shadows, but it isn't.
A couple of things to check:
The visibility check in the inner for loop might be returning a false positive (i.e. it's calculating that all surfaces[k] are not closer to lights[j] than your intersection point, for some j). This would cause it to incorrectly add that light[j]'s contribution to your accumulator. This would result in missing shadows, but it ought to happen everywhere, including your top recursion level, whereas you're only seeing missing shadows in reflections.
There might be an error in the color() method that's returning some wrong value that's then being added into the accumulator. Although without seeing that code, it's hard to know for sure.
You're using postfix decrement on recursion_depth inside the materialToUse.IsReflective() check. Can you verify that the decremented value of recursion_depth is actually being passed to the shade() method call? (And if not, try changing to prefix decrement).
return this->shade(... recursion_depth--)...
EDIT: Can you also verify that recursion_depth is just a parameter to the shade() method, i.e. that there isn't a global / static recursion_depth anywhere. Assuming that there isn't (and there shouldn't be), you can change the call above to
return this->shade(... recursion_depth - 1)...
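A tiny self-contained illustration of the difference (f and depth are just placeholder names):
#include <cstdio>

void f(int depth) { std::printf("f received %d\n", depth); }

int main() {
    int depth = 3;
    f(depth--);   // prints 3: the old value is passed, depth becomes 2 only afterwards
    f(--depth);   // prints 1: depth is decremented first, then passed
    f(depth - 1); // prints 0: clearest option, depth itself is left untouched
    return 0;
}
In the recursive shade() call this means recursion_depth never actually decreases from one level to the next, so the depth check at the top can't terminate the reflection recursion.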
EDIT 2: A couple of other things to look at:
In color(), I don't understand why you're including the direction to the camera in your calculations. The color of intersections other than the first one, per pixel, ought to be independent of where the camera is. But I doubt that's the cause of this issue.
Verify that return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); is doing the right thing with that matrix multiplication. Why are you multiplying by materialToUse.km?
Verify that materialToUse.km is constant per surface (i.e. it doesn't change over the geometry of the surface, the depth of iteration, or anything else).
Break up the statement return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); into its component objects, so you can see the intermediate results in the debugger:
Vec reflectedColor = this->shade(accumulator, r, surfaces, lights, recursion_depth - 1);
Vec multipliedColor = reflectedColor.mult(materialToUse.km);
return multipliedColor;
Determine the image (x, y) coordinates of one of your problematic pixels. Set a conditional breakpoint that's triggered when rendering that pixel, and then step through your shade() method. Assuming you pick the pixel pointed to by the bottom right arrow in your example image, you ought to see one recursion into shade(). Stepping through that first recursion, you'll see that your code is incorrectly adding the light contribution from the floor when it should be in shadow.
To answer my own question: I was not checking that t was less than the distance from the intersection point to the light position.
Instead of:
if (t > EPSILON && t < closestIntersection) {
    visible = false;
    break;
}
it should be:
if (t > EPSILON && t < max_t) {
    visible = false;
    break;
}
where max_t is
double max_t = dirToLight.magnitude();
before dirToLight has been normalized.
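Putting it together, the corrected visibility test might read roughly as follows. This is a sketch using the same names as the code above; toLight is a name I introduced.
// Shadow test: an occluder only blocks the light if it lies between
// the intersection point and the light, i.e. t < distance to the light.
Vec toLight = light->origin - intersectionPoint;
double max_t = toLight.magnitude();      // distance to the light, taken before normalizing
Vec dirToLight = toLight.norm();
bool visible = true;
for (unsigned int k = 0; k < surfaces.size(); k++) {
    double t = surfaces[k]->intersection(Ray(intersectionPoint, dirToLight));
    if (t > EPSILON && t < max_t) {
        visible = false;
        break;
    }
}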

Optimizing a Ray Tracer

I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta  = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3x3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r*-1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy (BVH). Right now the code intersects every ray with every triangle of every object. With a BVH we instead ask, "given this ray, which triangles could it intersect?", which means each ray generally only needs to be tested against a handful of bounding volumes and triangles rather than every single triangle in the scene.
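A full BVH is a bigger project, but its core primitive is a cheap ray-vs-axis-aligned-box test. Here is a minimal slab-test sketch; the types and names (AABB, hitAABB, invDir) are mine, not from the code above.
#include <algorithm>

struct AABB { float min[3], max[3]; };

// Returns true if the ray origin + t*dir hits the box for some t in [tMin, tMax].
// invDir holds the per-axis reciprocals of the ray direction.
bool hitAABB(const AABB& box, const float orig[3], const float invDir[3],
             float tMin, float tMax)
{
    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (box.min[axis] - orig[axis]) * invDir[axis];
        float t1 = (box.max[axis] - orig[axis]) * invDir[axis];
        if (invDir[axis] < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;    // the slabs do not overlap: the ray misses the box
    }
    return true;
}
Each BVH node stores such a box plus either two child nodes or a small range of triangles; a ray that misses a node's box skips everything underneath it, which is where the big speed-up comes from.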
IntersectRayWithTriangleWithThisOrigin
From the look of it, it builds a matrix from the triangle edges (the triangle basis vectors become the X and Y columns) and inverts it. I don't quite get the Z column: I would expect the ray direction there rather than the position of the pixel (the ray origin), but I may be misinterpreting something.
Anyway, the inverse matrix computation is the biggest problem: you are computing it for every triangle, for every pixel, and that is a lot. It would be faster to compute the inverse transform matrix of each triangle once, before ray tracing, with X and Y as the triangle basis and Z perpendicular to both of them (always facing the same direction towards the camera). Then you just transform your ray into that space and check the limits of the intersection, which is only a matrix*vector multiply and a few ifs instead of an inverse matrix computation per test.
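A sketch of that precomputation idea, assuming the V3/M33 operators already used in the question's code (operator-, operator[], M33::Inverse(), matrix*vector); the function and variable names are mine:
// Once per triangle, before tracing:
//   e1 = Vs[1]-Vs[0], e2 = Vs[2]-Vs[0], n = cross(e1, e2)
//   build M with columns (e1, e2, n), store invM = M.Inverse() and V0 = Vs[0].
// In that local frame the triangle becomes the unit triangle in the z = 0 plane.

// Per ray: just a matrix*vector transform and a few ifs.
bool hitPrecomputed(const M33 &invM, const V3 &V0,
                    const V3 &O, const V3 &d,        // ray origin and direction
                    float &b, float &g, float &t)
{
    V3 o = invM * (O - V0);                          // ray origin in triangle space
    V3 r = invM * d;                                 // ray direction in triangle space
    if (r[2] > -1e-8f && r[2] < 1e-8f) return false; // ray parallel to the triangle's plane
    t = -o[2] / r[2];                                // hit the z = 0 plane
    if (t < 0.0f) return false;                      // behind the ray origin
    b = o[0] + t * r[0];                             // edge coordinates of the hit point
    g = o[1] + t * r[1];
    return b >= 0.0f && g >= 0.0f && b + g <= 1.0f;  // same bounds check as on bgt above
}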
Another way would be to solve the ray vs. plane intersection algebraically, which should lead to a much simpler expression than a matrix inversion; after that it is just a matter of checking the bounds of the basis-vector coordinates.
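And a self-contained sketch of the algebraic route, the standard Möller-Trumbore test; it deliberately uses its own little Vec3 and helpers rather than the question's V3, so the names do not clash.
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and fills (u, v, t) when the ray hits the triangle in front of its origin.
// u and v play the same role as bgt[0] and bgt[1] above.
bool intersectMT(Vec3 orig, Vec3 dir, const Vec3 tri[3], float &u, float &v, float &t)
{
    Vec3 e1 = sub(tri[1], tri[0]);
    Vec3 e2 = sub(tri[2], tri[0]);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;   // ray is (nearly) parallel to the triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, tri[0]);
    u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > 0.0f;                            // hit must be in front of the ray origin
}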

Intersection problems with ray-sphere intersection

I'm writing a simple ray tracer, and to keep it simple for now I've decided to have only spheres in my scene. I am at a stage where I merely want to confirm that my rays are intersecting a sphere in the scene properly, nothing else. I've created a Ray and Sphere class and then a function in my main file which goes through each pixel to see if there's an intersection (relevant code is posted below). The problem is that the whole intersection with the sphere is acting rather strangely. If I create a sphere with center (0, 0, -20) and a radius of 1, then I get only one intersection, which is always at the very first pixel of what would be my image (the upper-left corner). Once I reach a radius of 15 I suddenly get three intersections in the upper-left region. A radius of 18 gives me six intersections, and once I reach a radius of 20+ I suddenly get an intersection for EACH pixel, so something is not behaving as it's supposed to.
I suspected that my ray-sphere intersection code might be at fault, but having looked through it and searched the net for more information, most solutions describe the very same approach I use, so I assume it shouldn't(!) be the problem. So I am not exactly sure what I am doing wrong; it could be my intersection code or it could be something else. I just can't seem to find it. Could it be that I am using the wrong values for the sphere and rays? Below is the relevant code.
Sphere class:
Sphere::Sphere(glm::vec3 center, float radius)
    : m_center(center), m_radius(radius), m_radiusSquared(radius*radius)
{
}

//Sphere-ray intersection. Equation: (P-C)^2 - R^2 = 0, P = o+t*d
//(P-C)^2 - R^2 => (o+t*d-C)^2-R^2 => o^2+(td)^2+C^2+2td(o-C)-2oC-R^2
//=> at^2+bt+c, a = d*d, b = 2d(o-C), c = (o-C)^2-R^2
//o = ray origin, d = ray direction, C = sphere center, R = sphere radius
bool Sphere::intersection(Ray& ray) const
{
    //Squared distance between ray origin and sphere center
    float squaredDist = glm::dot(ray.origin()-m_center, ray.origin()-m_center);

    //If the distance is less than the squared radius of the sphere...
    if(squaredDist <= m_radiusSquared)
    {
        //Point is in sphere, consider as no intersection existing
        //std::cout << "Point inside sphere..." << std::endl;
        return false;
    }

    //Will hold solution to quadratic equation
    float t0, t1;

    //Calculating the coefficients of the quadratic equation
    float a = glm::dot(ray.direction(),ray.direction()); // a = d*d
    float b = 2.0f*glm::dot(ray.direction(),ray.origin()-m_center); // b = 2d(o-C)
    float c = glm::dot(ray.origin()-m_center, ray.origin()-m_center) - m_radiusSquared; // c = (o-C)^2-R^2

    //Calculate discriminant
    float disc = (b*b)-(4.0f*a*c);
    if(disc < 0) //If discriminant is negative no intersection happens
    {
        //std::cout << "No intersection with sphere..." << std::endl;
        return false;
    }
    else //If discriminant is positive one or two intersections (two solutions) exists
    {
        float sqrt_disc = glm::sqrt(disc);
        t0 = (-b - sqrt_disc) / (2.0f * a);
        t1 = (-b + sqrt_disc) / (2.0f * a);
    }

    //If the second intersection has a negative value then the intersections
    //happen behind the ray origin which is not considered. Otherwise t0 is
    //the intersection to be considered
    if(t1<0)
    {
        //std::cout << "No intersection with sphere..." << std::endl;
        return false;
    }
    else
    {
        //std::cout << "Intersection with sphere..." << std::endl;
        return true;
    }
}
Program:
#include "Sphere.h"
#include "Ray.h"
void renderScene(const Sphere& s);
const int imageWidth = 400;
const int imageHeight = 400;
int main()
{
//Create sphere with center in (0, 0, -20) and with radius 10
Sphere testSphere(glm::vec3(0.0f, 0.0f, -20.0f), 10.0f);
renderScene(testSphere);
return 0;
}
//Shoots rays through each pixel and check if there's an intersection with
//a given sphere. If an intersection exists then the counter is increased.
void renderScene(const Sphere& s)
{
//Ray r(origin, direction)
Ray r(glm::vec3(0.0f), glm::vec3(0.0f));
//Will hold the total amount of intersections
int counter = 0;
//Loops through each pixel...
for(int y=0; y<imageHeight; y++)
{
for(int x=0; x<imageWidth; x++)
{
//Change ray direction for each pixel being processed
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth), ((imageHeight/2-y)/(float)imageHeight), -1.0f));
//If current ray intersects sphere...
if(s.intersection(r))
{
//Increase counter
counter++;
}
}
}
std::cout << counter << std::endl;
}
Your second solution (t1) to the quadratic equation is wrong in the case disc > 0, where you need something like:
float sqrt_disc = glm::sqrt(disc);
t0 = (-b - sqrt_disc) / (2 * a);
t1 = (-b + sqrt_disc) / (2 * a);
I think it's best to write out the equation in this form rather than turning the division by 2 into a multiplication by 0.5, because the more the code resembles the mathematics, the easier it is to check.
A few other minor comments:
It seemed confusing to re-use the name disc for sqrt(disc), so I used a new variable name above.
You don't need to test for t0 > t1, since you know that both a and sqrt_disc are positive, and so t1 is always greater than t0.
If the ray origin is inside the sphere, it's possible for t0 to be negative and t1 to be positive. You don't seem to handle this case.
You don't need a special case for disc == 0, as the general case computes the same values as the special case. (And the fewer special cases you have, the easier it is to check your code.)
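Following up on those comments, the intersection routine could be restructured roughly like the sketch below. It keeps the question's member names but also reports the hit distance, which is a signature change; tHit is a name I introduced.
// Returns true and the nearest positive hit distance when the ray hits the sphere.
bool Sphere::intersection(const Ray& ray, float& tHit) const
{
    glm::vec3 oc = ray.origin() - m_center;
    float a = glm::dot(ray.direction(), ray.direction());
    float b = 2.0f * glm::dot(ray.direction(), oc);
    float c = glm::dot(oc, oc) - m_radiusSquared;

    float disc = b*b - 4.0f*a*c;
    if (disc < 0.0f)
        return false;                       // the ray misses the sphere entirely

    float sqrt_disc = glm::sqrt(disc);
    float t0 = (-b - sqrt_disc) / (2.0f * a);
    float t1 = (-b + sqrt_disc) / (2.0f * a);

    if (t1 < 0.0f)
        return false;                       // both hits are behind the ray origin
    tHit = (t0 >= 0.0f) ? t0 : t1;          // t0 < 0 <= t1 means the origin is inside the sphere
    return true;
}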
If I understand your code correctly, you might want to try:
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth),
((imageHeight/2-y)/(float)imageHeight),
-1.0f));
Right now, you've positioned the camera one unit away from the screen, but the rays can shoot as much as 400 units to the right and down. This is a very broad field of view. Also, your rays are only sweeping one octant of space. This is why you only get a handful of pixels in the upper-left corner of the screen. The code I wrote above should rectify that.
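For example, a pinhole-style ray generator with an explicit field of view might look like the sketch below. It is only an illustration: fovY, aspect and scale are names I introduced, std::tan comes from <cmath>, and the rest reuses the names from the question's renderScene().
float fovY   = glm::radians(60.0f);                  // vertical field of view
float aspect = (float)imageWidth / (float)imageHeight;
float scale  = std::tan(fovY * 0.5f);

for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        // Map the pixel centre to [-1, 1] on both axes, then widen by the FOV and aspect ratio.
        float px = (2.0f * (x + 0.5f) / imageWidth  - 1.0f) * aspect * scale;
        float py = (1.0f - 2.0f * (y + 0.5f) / imageHeight) * scale;
        r.setDirection(glm::normalize(glm::vec3(px, py, -1.0f)));
        if (s.intersection(r))
            counter++;
    }
}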

OpenGL Calculating Normals (Quads)

My issue is regarding OpenGL and normals. I understand the math behind them, and I am having some success.
The function I've attached below accepts an interleaved vertex array and calculates the normals for every 4 vertices. These represent QUADS whose vertices face the same direction, so by my understanding those 4 vertices should share the same normal, so long as they face the same way.
The problem I am having is that my QUADS are rendering with a diagonal gradient, much like this (Light Effect), except that the shadow is in the middle, with the light in the corners.
I draw my QUADS in a consistent fashion: TopLeft, TopRight, BottomRight, BottomLeft, and the vectors I use to calculate my normals are TopRight - TopLeft and BottomRight - TopLeft.
Hopefully someone can spot the blunder I've made, but I have been at this for hours to no avail.
For the record I render a Cube, and a Teapot next to my objects to check my lighting is functioning, so I'm fairly sure there is no issue regarding Light position.
void CalculateNormals(point8 toCalc[], int toCalcLength)
{
    GLfloat N[3], U[3], V[3]; //N will be our final calculated normal, U and V will be the subjects of cross-product
    float length;

    for (int i = 0; i < toCalcLength; i+=4) //Starting with every first corner QUAD vertice
    {
        U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7]; //Calculate Ux Uy Uz
        V[0] = toCalc[i+3][5] - toCalc[i][5]; V[1] = toCalc[i+3][6] - toCalc[i][6]; V[2] = toCalc[i+3][7] - toCalc[i][7]; //Calculate Vx Vy Vz

        N[0] = (U[1]*V[2]) - (U[2]*V[1]);
        N[1] = (U[2]*V[0]) - (U[0]*V[2]);
        N[2] = (U[0]*V[1]) - (U[1]*V[0]);

        //Calculate length for normalising
        length = (float)sqrt((pow(N[0],2)) + (pow(N[1],2)) + (pow(N[2],2)));
        for (int a = 0; a < 3; a++)
        {
            N[a] /= length;
        }

        for (int j = 0; i < 4; i++)
        {
            //Apply normals to QUAD vertices (3,4,5 index position of normals in interleaved array)
            toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
        }
    }
}
It seems like you are taking the vertex position values used in the calculations from indices 5, 6, and 7, and then writing the normals out at indices 3, 4, and 5. Note how index 5 is used in both. I suppose one of them is not correct.
It looks like your for-loops are biting you.
for (int i = 0; i < toCalcLength; i+=4) //Starting with every first corner QUAD vertice
{
    ...
    for (int j = 0; i < 4; i++)   // <-- should these two 'i's be 'j' instead?
    {
        // j will never increment, and this inner loop won't run at all
        // after the first pass through the outer loop
        ...
    }
}
You use indexes 3, 4, and 5 for storing normal:
toCalc[i+j][3] = N[0]; toCalc[i+j][4] = N[1]; toCalc[i+j][5] = N[2];
AND you use indexes 5, 6 and 7 to get point coordinates:
U[0] = toCalc[i+1][5] - toCalc[i][5]; U[1] = toCalc[i+1][6] - toCalc[i][6]; U[2] = toCalc[i+1][7] - toCalc[i][7];
Those indexes overlap (the normal's z shares the same index as the position's x), which shouldn't be happening.
Recommendations:
Put everything into structures.
Either:
Use a math library,
or put vector arithmetic into separate, appropriately named subroutines.
Use named variables instead of indexes.
By doing so you'll reduce the number of bugs in your code. a.position.x is easier to read than quad[0][5], and it is easier to fix a typo in a vector operation when the code hasn't been copy-pasted.
You can use unions to access vector components by both index and name:
struct Vector3{
    union{
        struct{
            float x, y, z;
        };
        float v[3];
    };
};
For calculating the normal of quad ABCD
A--B
| |
C--D
use the formula:
normal = normalize((B.position - A.position) X (C.position - A.position)).
OR
normal = normalize((D.position - A.position) X (C.position - B.position)).
Where "X" means "cross-product".
Either way will work fine.
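Putting that together, a normal computation with named helpers on the Vector3 above might look like this sketch; make, sub, cross, normalize and quadNormal are names I made up, not part of any existing API.
#include <cmath>

Vector3 make(float x, float y, float z) { Vector3 r; r.x = x; r.y = y; r.z = z; return r; }
Vector3 sub(const Vector3 &a, const Vector3 &b)   { return make(a.x-b.x, a.y-b.y, a.z-b.z); }
Vector3 cross(const Vector3 &a, const Vector3 &b) {
    return make(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
}
Vector3 normalize(const Vector3 &a) {
    float len = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z);
    return make(a.x/len, a.y/len, a.z/len);
}

// Quad laid out as in the diagram above:  A--B
//                                         |  |
//                                         C--D
Vector3 quadNormal(const Vector3 &A, const Vector3 &B, const Vector3 &C) {
    return normalize(cross(sub(B, A), sub(C, A)));   // (B - A) x (C - A)
}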

How do I use texture-mapping in a simple ray tracer?

I am attempting to add features to a ray tracer in C++. Namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code. This was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like. That sphere's colour comes from one set colour, not a texture map.
Another problem is that when the texture map contains anything other than a single colour, the sphere turns white. My test image is a picture of water, and when it is mapped, only one ring of bluish pixels shows up surrounding the white colour.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;
        Vector ve = Point(oc.x,oc.y,oc.z+1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x,oc.y+1,oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);

        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;
        float u;

        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 /(float) sin(phi));
        float theta = num /(float) (2 * M_PI);

        if (theta < 0 || theta == NAN) {theta = 0;}
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }

        int x = (u * IMAGE_WIDTH) -1;
        int y = (v * IMAGE_WIDTH) -1;
        int p = (y * IMAGE_WIDTH + x)*3;
        return Color(TEXT_DATA[p+2],TEXT_DATA[p+1],TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
};
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);
float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;

Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f/(len*len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is simply taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how in the solid red case, there doesn't seem to be any gradation around the sphere. The red looks like it's saturated.
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1, and that the p index into your TEXT_DATA array is also within range.
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.
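For example, the u,v-to-texel lookup in getColor() could be guarded roughly like this sketch; assert comes from <cassert>, and IMAGE_HEIGHT is an assumed constant for the texture height, since the posted code reuses IMAGE_WIDTH for both axes.
// Defensive checks around the texel lookup.
assert(u >= 0.0f && u <= 1.0f);
assert(v >= 0.0f && v <= 1.0f);
int x = (int)(u * (IMAGE_WIDTH  - 1));               // stays in [0, IMAGE_WIDTH  - 1]
int y = (int)(v * (IMAGE_HEIGHT - 1));               // stays in [0, IMAGE_HEIGHT - 1]
int p = (y * IMAGE_WIDTH + x) * 3;
assert(p >= 0 && p + 2 < IMAGE_WIDTH * IMAGE_HEIGHT * 3);
return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);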