Weird Raytracing Artifacts - C++

I am trying to create a ray tracer using Qt, but I have some really weird artifacts going on.
Before I implemented shading, I just had 4 spheres, 3 triangles, and 2 bounded planes in my scene. They all showed up as expected and in the colors I expected; however, on my planes I would see dots the same color as the background. These dots stayed static relative to my view position, so if I moved the camera around, the dots moved around as well. They only affected the planes and triangles and never appeared on the spheres.
Once I implemented shading, the issue got worse. The dots now also appear on the spheres wherever the light reaches them, i.e. on any part affected by the diffuse term.
Also, my one plane of pure blue (RGB 0, 0, 255) has gone completely black. Since I have two planes, I switched their colors and again the blue one went black, so it's a color issue and not a plane issue.
If anyone has any suggestions as to what the problem could be or wants to see any particular code let me know.
#include "plane.h"
#include "intersection.h"
#include <math.h>
#include <iostream>
Plane::Plane(QVector3D bottomLeftVertex, QVector3D topRightVertex, QVector3D normal, QVector3D point, Material *material)
{
minCoords_.setX(qMin(bottomLeftVertex.x(),topRightVertex.x()));
minCoords_.setY(qMin(bottomLeftVertex.y(),topRightVertex.y()));
minCoords_.setZ(qMin(bottomLeftVertex.z(),topRightVertex.z()));
maxCoords_.setX(qMax(bottomLeftVertex.x(),topRightVertex.x()));
maxCoords_.setY(qMax(bottomLeftVertex.y(),topRightVertex.y()));
maxCoords_.setZ(qMax(bottomLeftVertex.z(),topRightVertex.z()));
normal_ = normal;
normal_.normalize();
point_ = point;
material_ = material;
}
Plane::~Plane()
{
}
void Plane::intersect(QVector3D rayOrigin, QVector3D rayDirection, Intersection* result)
{
if(normal_ == QVector3D(0,0,0)) //plane is degenerate
{
cout << "degenerate plane" << endl;
return;
}
float minT;
//t = -Normal*(Origin-Point) / Normal*direction
float numerator = (-1)*QVector3D::dotProduct(normal_, (rayOrigin - point_));
float denominator = QVector3D::dotProduct(normal_, rayDirection);
if (fabs(denominator) < 0.0000001) //plane orthogonal to view
{
return;
}
minT = numerator / denominator;
if (minT < 0.0)
{
return;
}
QVector3D intersectPoint = rayOrigin + (rayDirection * minT);
//check inside plane dimensions
if(intersectPoint.x() < minCoords_.x() || intersectPoint.x() > maxCoords_.x() ||
intersectPoint.y() < minCoords_.y() || intersectPoint.y() > maxCoords_.y() ||
intersectPoint.z() < minCoords_.z() || intersectPoint.z() > maxCoords_.z())
{
return;
}
//only update if closest object
if(result->distance_ > minT)
{
result->hit_ = true;
result->intersectPoint_ = intersectPoint;
result->normalAtIntersect_ = normal_;
result->distance_ = minT;
result->material_ = material_;
}
}
QVector3D MainWindow::traceRay(QVector3D rayOrigin, QVector3D rayDirection, int depth)
{
    if (depth > maxDepth)
    {
        return backgroundColour;
    }
    Intersection* rayResult = new Intersection();
    foreach (Shape* shape, shapeList)
    {
        shape->intersect(rayOrigin, rayDirection, rayResult);
    }
    if (rayResult->hit_ == false)
    {
        return backgroundColour;
    }
    else
    {
        QVector3D intensity = QVector3D(0, 0, 0);
        QVector3D shadowRay = pointLight - rayResult->intersectPoint_;
        shadowRay.normalize();
        Intersection* shadowResult = new Intersection();
        foreach (Shape* shape, shapeList)
        {
            shape->intersect(rayResult->intersectPoint_, shadowRay, shadowResult);
        }
        if (shadowResult->hit_ == true)
        {
            intensity += shadowResult->material_->diffuse_ * intensityAmbient;
        }
        else
        {
            intensity += rayResult->material_->ambient_ * intensityAmbient;
            // Diffuse
            intensity += rayResult->material_->diffuse_ * intensityLight
                         * qMax(QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay), 0.0f);
            // Specular
            QVector3D R = ((2 * (QVector3D::dotProduct(rayResult->normalAtIntersect_, shadowRay)) * rayResult->normalAtIntersect_) - shadowRay);
            R.normalize();
            QVector3D V = rayOrigin - rayResult->intersectPoint_;
            V.normalize();
            intensity += rayResult->material_->specular_ * intensityLight
                         * pow(qMax(QVector3D::dotProduct(R, V), 0.0f), rayResult->material_->specularExponent_);
        }
        return intensity;
    }
}

So I figured out my issue. It comes down to floating-point precision: any check for < 0.0 would intermittently fail because of float imprecision, so I had to add an offset to all of those checks and test against < 0.001 instead.
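For anyone hitting the same thing, here is a minimal sketch of the kind of epsilon check described above. The names kEpsilon and acceptHit are illustrative, not from the original code; 0.001 is simply the threshold mentioned in the answer.

// Illustrative helper: treat any intersection distance below a small
// epsilon as "no hit", so rays re-launched from a surface (e.g. shadow
// rays) don't immediately re-hit that same surface due to float error.
const float kEpsilon = 0.001f;

bool acceptHit(float t, float closestSoFar)
{
    // Reject hits behind the origin or inside the noise band, and only
    // accept a hit that is closer than the current best.
    return t > kEpsilon && t < closestSoFar;
}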

Related

RayTracing, refraction code produces weird results

This is the relevant code for refraction. I'm getting weird results when I change the index of refraction. Ideally, the sphere is supposed to show the purple sphere and the background distorted by refraction, but instead it looks like there's a red sphere inside the transparent one.
if(depth < MaxRecursion) {
    if(shape[pos]->transparent() == 1) {
        vec normal = N;
        // etam = index of refraction of the medium, etao = index of the outside
        float eta, etao = 1.0, etam = 1.1;
        float cosi = dot(N, unit_vector(r.dir()));
        if(cosi < 0.0) {
            cosi = -cosi;
        }
        else {
            normal = -normal;
            swap(etao, etam);
        }
        eta = etao/etam;
        float c2 = 1.0 - eta*eta*(1.0 - cosi*cosi);
        if(c2 > 0.0) {
            vec dir = unit_vector(eta*unit_vector(r.dir()) - (eta*cosi - sqrtf(c2))*normal);
            ray refraction(r.p_at_par(t_near) + normal*1e-4, dir);
            total += color(refraction, shape, lighting, depth + 1);
            // color() is the current (recursive) function
        }
        else { // total internal reflection
            ray reflect = reflection(unit_vector(r.dir()), normal, r.p_at_par(t_near));
            total += color(reflect, shape, lighting, depth + 1);
        }
    }
}
And for higher indices of refraction I get a completely white sphere (etam = 1.9).
I know that the error lies in this part of the code alone, because everything else (reflection, shadows, etc.) works fine without the refraction code.

Ray transformation in a Ray - OBB intersection test

I've implemented an algorithm that tests for a Ray - AABB intersection and it works fine. But when I try to transform the ray into the AABB's local space (making this a Ray - OBB test), I can't get correct results. I've studied several forums and other resources but am still missing something. (Some sources suggest applying the inverted transformation to the ray origin and its end, and only then calculating the direction; others suggest applying it to the origin and the direction.) Can someone point me in the right direction (no pun intended)?
Here are the two functions responsible for the math:
1) Calculating inverses and other things to perform tests
bool Ray::intersectsMesh(const Mesh& mesh, const Transformation& transform) {
    float largestNearIntersection = std::numeric_limits<float>::min();
    float smallestFarIntersection = std::numeric_limits<float>::max();
    glm::mat4 modelTransformMatrix = transform.modelMatrix();
    Box boundingBox = mesh.boundingBox();
    glm::mat4 inverse = glm::inverse(transform.modelMatrix());
    glm::vec4 newOrigin = inverse * glm::vec4(mOrigin, 1.0);
    newOrigin /= newOrigin.w;
    mOrigin = newOrigin;
    mDirection = glm::normalize(inverse * glm::vec4(mDirection, 0.0));
    glm::vec3 xAxis = glm::vec3(glm::column(modelTransformMatrix, 0));
    glm::vec3 yAxis = glm::vec3(glm::column(modelTransformMatrix, 1));
    glm::vec3 zAxis = glm::vec3(glm::column(modelTransformMatrix, 2));
    glm::vec3 OBBTranslation = glm::vec3(glm::column(modelTransformMatrix, 3));
    printf("trans x %f y %f z %f\n", OBBTranslation.x, OBBTranslation.y, OBBTranslation.z);
    glm::vec3 delta = OBBTranslation - mOrigin;
    bool earlyFalseReturn = false;

    calculateIntersectionDistances(xAxis, delta, boundingBox.min.x, boundingBox.max.x,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(yAxis, delta, boundingBox.min.y, boundingBox.max.y,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(zAxis, delta, boundingBox.min.z, boundingBox.max.z,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    return true;
}
2) Helper function (probably not needed here, as it relates only to the AABB test, which works fine)
void Ray::calculateIntersectionDistances(const glm::vec3& axis,
                                         const glm::vec3& delta,
                                         float minPointOnAxis,
                                         float maxPointOnAxis,
                                         float *largestNearIntersection,
                                         float *smallestFarIntersection,
                                         bool *earlyFalseReturn)
{
    float dividend = glm::dot(axis, delta);
    float denominator = glm::dot(mDirection, axis);
    if (fabs(denominator) > 0.001f) {
        float t1 = (dividend + minPointOnAxis) / denominator;
        float t2 = (dividend + maxPointOnAxis) / denominator;
        if (t1 > t2) { std::swap(t1, t2); }
        *smallestFarIntersection = std::min(t2, *smallestFarIntersection);
        *largestNearIntersection = std::max(t1, *largestNearIntersection);
    } else if (-dividend + minPointOnAxis > 0.0 || -dividend + maxPointOnAxis < 0.0) {
        *earlyFalseReturn = true;
    }
}
As it turned out, the ray's world -> model transformation was correct. The bug was in the intersection test. I had to completely replace the intersection code, because I wasn't able to identify the bug in the old code, unfortunately.
Ray transformation code:
glm::mat4 inverse = glm::inverse(transform.modelMatrix());
glm::vec4 start = inverse * glm::vec4(mOrigin, 1.0);
glm::vec4 direction = inverse * glm::vec4(mDirection, 0.0);
direction = glm::normalize(direction);
And the Ray - AABB test was stolen from here
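For completeness, here is one common formulation of the slab-based Ray - AABB test that the transformed ray can be fed into. This is a generic sketch, not the code from the linked answer; the function name and parameters are assumptions.

#include <algorithm>
#include <cmath>
#include <limits>
#include <utility>
#include <glm/glm.hpp>

// Generic slab test: returns true if a ray (origin, direction) hits the
// axis-aligned box [boxMin, boxMax]. Use it after transforming the ray
// into the box's local space with the inverse model matrix, as above.
bool rayIntersectsAABB(const glm::vec3& origin, const glm::vec3& direction,
                       const glm::vec3& boxMin, const glm::vec3& boxMax)
{
    float tNear = -std::numeric_limits<float>::max();
    float tFar  =  std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis) {
        if (std::abs(direction[axis]) < 1e-6f) {
            // Ray is parallel to this slab: miss if the origin lies outside it.
            if (origin[axis] < boxMin[axis] || origin[axis] > boxMax[axis])
                return false;
        } else {
            float t1 = (boxMin[axis] - origin[axis]) / direction[axis];
            float t2 = (boxMax[axis] - origin[axis]) / direction[axis];
            if (t1 > t2) std::swap(t1, t2);
            tNear = std::max(tNear, t1);
            tFar  = std::min(tFar, t2);
            if (tNear > tFar) return false; // the slabs no longer overlap
        }
    }
    return tFar >= 0.0f; // box is not entirely behind the ray
}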

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refractions, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. Refraction is kind of working too: the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when I decrease the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring disappears entirely.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
    // first find the nearest intersection of the ray with an object
    Colour finalColour = skyBlue * (r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
    double t, t_min = INFINITY;
    int index_nearObj = -1;
    for(int i = 0; i < objSize; i++)
    {
        if(!dynamic_cast<Light *>(objects[i])) // skip light sources
        {
            t = objects[i]->findParam(r);
            if(t > 0 && t < t_min)
            {
                t_min = t;
                index_nearObj = i;
            }
        }
    }
    // no intersection
    if(index_nearObj < 0)
        return finalColour;

    Vector intersect = r.getOrigin() + r.getDirection()*t_min;
    Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
    Colour objectColor = objects[index_nearObj]->getColor();
    Ray rRefl, rRefr;                              // reflected and refracted rays
    Colour refl = finalColour, refr = finalColour; // reflected and refracted colours
    double reflectance = 0, transmittance = 0;

    if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
    {
        // handle reflection
        rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
        refl = raytrace(rRefl, depth + 1);
        reflectance = 1;
    }
    if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
    {
        // handle transmission
        rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
        refr = raytrace(rRefr, depth + 1);
    }

    Ray rShadow; // shadow ray
    bool shadowed;
    double t_light = -1;
    Colour localColour;
    Vector tmpv;

    // get material properties
    double ka = 0.2; // ambient coefficient
    double kd;       // diffuse coefficient
    double ks;       // specular coefficient
    Colour ambient = ka * objectColor; // ambient component
    Colour diffuse, specular;
    double brightness;
    localColour = ambient;

    // check whether the object is in shadow or lit:
    // cast a ray from the object and check if there
    // is an intersection with another object
    for(int i = 0; i < objSize; i++)
    {
        if(dynamic_cast<Light *>(objects[i])) // if the object is a light
        {
            // for each light
            shadowed = false;
            // create a ray towards the light
            tmpv = objects[i]->getPosition() - intersect;
            rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
            t_light = objects[i]->findParam(rShadow);
            if(t_light < 0) // no intersection, which should be impossible
                continue;
            // then check whether that ray intersects an object that is not a light
            for(int j = 0; j < objSize; j++)
            {
                if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj) // if obj is not a light
                {
                    t = objects[j]->findParam(rShadow);
                    // if it is smaller, the light is behind the object
                    // --> shadowed by this light
                    if (t >= 0 && t < t_light)
                    {
                        // set the flag and stop the loop
                        shadowed = true;
                        break;
                    }
                }
            }
            if(!shadowed)
            {
                rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
                // ray reflected from the light source, used for ks
                kd = maximum(0.0, (normal|rShadow.getDirection()));
                if(objects[index_nearObj]->getShiny() <= 0)
                    ks = 0;
                else
                    ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
                diffuse = kd * objectColor; // * objects[i]->getColour();
                specular = ks * objects[i]->getColor();
                brightness = 1 /(1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
                localColour += brightness * (diffuse + specular);
            }
        }
    }
    finalColour = localColour + (transmittance * refr + reflectance * refl);
    return finalColour;
}
Now the function that calculates the refracted ray. I used several different sites as resources, and each had a similar algorithm. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal; // invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);
    // Fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;
    if(n == 1.0)
        return r;
    if(cosT*cosT < 0.0) // total internal reflection
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also changed the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    // the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of a convex one.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens, though: a concave lens is a reducing lens that shows a wide-angle view of the scene in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give general idea, not (for example) reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function, where you compute cosI, use the dot product of normalize(r.getDirection()) and normal; currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(); again, you're currently using r.getDirection() in that calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
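To make those points concrete, here is a minimal, self-contained sketch of a refraction routine with both fixes applied: the direction is normalized before the dot product, and the radicand is tested before the square root. It uses a plain Vec3 struct rather than the question's Vector/Ray classes, the names Vec3 and refractDirection are illustrative, and the normal is assumed to already face the incoming ray (the if/else sign handling from the original is omitted).

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 operator*(double s, const Vec3& v) { return {s*v.x, s*v.y, s*v.z}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(const Vec3& v) { double len = std::sqrt(dot(v, v)); return (1.0/len)*v; }

// Returns false on total internal reflection; otherwise writes the refracted direction.
bool refractDirection(const Vec3& incident, const Vec3& normal, double n1, double n2, Vec3& out)
{
    Vec3 d = normalize(incident);      // fix 1: work with a unit-length direction
    double cosI = -dot(d, normal);     // normal is assumed to face the incoming ray
    double n = n1 / n2;
    double sinT2 = n*n * (1.0 - cosI*cosI);
    if (sinT2 > 1.0)                   // fix 2: test the radicand before sqrt
        return false;                  // total internal reflection
    double cosT = std::sqrt(1.0 - sinT2);
    out = n*d + (n*cosI - cosT)*normal;
    return true;
}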
Hope that helps!

C++ OpenGL: Ray Trace Shading Isn't Properly Shading

I'm a CS student, and for our final we were told to construct the reflections on multiple spheres via ray tracing. That's almost literally all we got for directions, apart from a picture of how it should look when finished. So I need spheres, with their reflections (using ray tracing) mapped onto them, with proper shading from a light.
Well, I have all of it working, except for having multiple spheres and the fact that it doesn't look like the picture he gave us as a rubric.
I'm not too sure how to handle multiple spheres, but I'd say I need to store them in a 2D array and modify a few sections of code.
What I had in mind was modifying sphere_intersect and find_reflect to take which sphere is being analyzed. Next, modify find_reflect so that when the new vector u is calculated, its starting point (P0) is also updated. Then, if the ray hits a sphere, count how many times the ray has been reflected and terminate at some point (after 10 bounces maybe), then just draw the pixel. For an added touch I'd like to give the spheres solid colors, which would call for finding the normal of a sphere, I believe.
Anyways I'm going to attach a picture of his, a picture of mine, and the source code. Hopefully someone can help me out on this one.
Thanks in advance!
Professor's spheres
My spheres
#include "stdafx.h"
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>
#include <math.h>
#include <string>
#define screen_width 750
#define screen_height 750
#define true 1
#define false 0
#define perpendicular 0
int gridXsize = 20;
int gridZsize = 20;
float plane[] = {0.0, 1.0, 0.0, -50.0,};
float sphere[] = {250.0, 270.0, -100.0, 100.0};
float eye[] = {0.0, 400.0, 550.0};
float light[] = {250.0, 550.0, -200.0};
float dot(float *u, float *v)
{
return u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
}
void norm(float *u)
{
float norm = sqrt(abs(dot(u,u)));
for (int i =0; i <3; i++)
{
u[i] = u[i]/norm;
}
}
float plane_intersect(float *u, float *pO)
{
float normt[3] = {plane[0], plane[1], plane[2]};
float s;
if (dot(u,normt) == 0)
{
s = -10;
}
else
{
s = (plane[3]-(dot(pO,normt)))/(dot(u,normt));
}
return s;
}
float sphere_intersect(float *u, float *pO)
{
float deltaP[3] = {sphere[0]-pO[0],sphere[1]-pO[1],sphere[2]-pO[2]};
float deltLen = sqrt(abs(dot(deltaP,deltaP)));
float t=0;
float answer;
float det;
if ((det =(abs(dot(u,deltaP)*dot(u,deltaP))- (deltLen*deltLen)+sphere[3]*sphere[3])) < 0)
{
answer = -10;
}
else
{
t =-1*dot(u,deltaP)- sqrt(det) ;
if (t>0)
{
answer = t;
}
else
{
answer = -10;
}
}
return answer;
}
void find_reflect(float *u, float s, float *pO)
{
float n[3] = {pO[0]+s *u[0]-sphere[0],pO[1]+s *u[1]-sphere[1],pO[2]+s *u[2]- sphere[2]};
float l[3] = {s *u[0],s *u[1],s *u[2]};
u[0] =(2*dot(l,n)*n[0])-l[0];
u[1] = (2*dot(l,n)*n[1])-l[1];
u[2] = (2*dot(l,n)*n[2])-l[2];
}
float find_shade(float *u,float s, float *pO)
{
float answer;
float lightVec[3] = {light[0]-(pO[0]+s *u[0]), light[1]-(pO[1]+s *u[1]), light[2]-(pO[2]+s *u[2])};
float n[3] = {pO[0]+s *u[0]-sphere[0],pO[1]+s *u[1]-sphere[1],pO[2]+s *u[2]-sphere[2]};
answer = -1*dot(lightVec,n)/(sqrt(abs(dot(lightVec,lightVec)))*sqrt(abs(dot(n,n))));
return answer;
}
void init()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0,screen_width,0,screen_height);
}
void display()
{
glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
for (int i=0; i < screen_width; i++)
{
for (int j=0; j < screen_height; j++)
{
float ray[3] = {1*(eye[0]-i),-1*(eye[1]-j),1*eye[2]};
float point[3] = {i,j,0};
norm(ray);
int plotted = false;
while (!plotted)
{
float s_plane = plane_intersect(ray, point);
float s_sphere = sphere_intersect(ray, point);
if (s_plane <= 0 && s_sphere <=0)
{
glColor3f(0,0,0);
glBegin(GL_POINTS);
glVertex3f(i,j,0);
glEnd();
plotted = true;
}
else if (s_sphere >= 0 && (s_plane <=0 || s_sphere <= s_plane))
{
find_reflect(ray, s_sphere, point);
}
else if (s_plane >=0 && (s_sphere <=0 ||s_plane <= s_sphere))
{
float shade = find_shade(ray, s_plane, point);
float xx = s_plane*ray[0] + eye[0];
float z = s_plane*ray[2] + eye[2];
if (abs((int)xx/gridXsize)%2 == abs((int)z/gridZsize)%2)
{
glColor3f(shade,0,0);
}
else
{
glColor3f(shade,shade,shade);
}
glBegin(GL_POINTS);
glVertex3f(i,j,0);
glEnd();
plotted = true;
}
}
}
}
glFlush();
}
int main(int argc, char **argv)
{
glutInit(&argc, argv);
glutCreateWindow("Ray Trace with Sphere.");
glutInitWindowSize(screen_width,screen_height);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutDisplayFunc(display);
init();
glutMainLoop();
return 0;
}
The professor did not tell you much because this topic is covered thousands of times all over the web; just check out "Whitted Raytracing" ;) It's homework, and 5 minutes of googling would solve the issue... Some clues to help without doing your homework for you:
Do it step by step; don't try to reproduce the picture in one go.
Get one sphere working first: if the ray hits the plane, plot a green pixel; if it hits the sphere, a red pixel; otherwise, black. That's enough to get the intersection computations right. Judging from your picture, you don't have the intersections right yet, so start there.
Same as the previous step, but with several spheres: check the intersection for every object and keep the closest intersection to the point of view (see the sketch after this list).
Same as the previous step, but also compute the amount of light received at each intersection found, to get shades of red for the spheres and shades of green for the plane. (Hint: dot product ^^)
Texture for the plane.
Reflection for the spheres. Pro tip: a mirror doesn't reflect 100% of the light, just a fraction of it.
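As a rough illustration of the "several spheres, keep the closest hit" step, here is a small self-contained sketch in the same C-style as the question's code. The extra sphere positions, NUM_SPHERES, and the function names are made up for illustration; they are not part of the assignment.

#include <math.h>

#define NUM_SPHERES 3
// each row: center x, y, z and radius, in the same layout as sphere[] above
float spheres[NUM_SPHERES][4] = {
    {250.0f, 270.0f, -100.0f, 100.0f},
    {450.0f, 200.0f, -250.0f,  80.0f},
    { 80.0f, 200.0f, -250.0f,  80.0f},
};

// Standard ray/sphere intersection for sphere index idx; u must be normalized.
// Returns the nearest positive hit distance, or -10 for a miss (as above).
float sphere_intersect_idx(float *u, float *pO, int idx)
{
    float dp[3] = { spheres[idx][0]-pO[0], spheres[idx][1]-pO[1], spheres[idx][2]-pO[2] };
    float b = u[0]*dp[0] + u[1]*dp[1] + u[2]*dp[2];
    float det = b*b - (dp[0]*dp[0] + dp[1]*dp[1] + dp[2]*dp[2]) + spheres[idx][3]*spheres[idx][3];
    if (det < 0) return -10;
    float t = b - sqrt(det);
    return (t > 0) ? t : -10;
}

// Check every sphere and keep the closest positive hit.
// Returns the index of the closest sphere, or -1 if none is hit;
// the hit distance is written to *sOut.
int closest_sphere(float *u, float *pO, float *sOut)
{
    int best = -1;
    float bestS = 1e30f;
    for (int i = 0; i < NUM_SPHERES; i++)
    {
        float s = sphere_intersect_idx(u, pO, i);
        if (s > 0 && s < bestS) { bestS = s; best = i; }
    }
    *sOut = bestS;
    return best;
}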

How do I use texture-mapping in a simple ray tracer?

I am attempting to add features to a ray tracer in C++; namely, I am trying to add texture mapping to the spheres. For simplicity, I am using an array to store the texture data. I obtained the texture data by using a hex editor and copying the correct byte values into an array in my code; this was just for my testing purposes. When the values of this array correspond to an image that is simply red, it appears to work close to what is expected, except there is no shading.
first image http://dl.dropbox.com/u/367232/Texture.jpg
The bottom right of the image shows what a correct sphere should look like. That sphere uses one set colour, not a texture map.
Another problem is that when the texture map is of something other than just one colour of pixels, it turns white. My test image is a picture of water, and when it is mapped, only one ring of bluish pixels shows up surrounding the white colour.
bmp http://dl.dropbox.com/u/367232/vPoolWater.bmp
When this is done, it simply appears as this:
second image http://dl.dropbox.com/u/367232/texture2.jpg
Here are a few code snippets:
Color getColor(const Object *object, const Ray *ray, float *t)
{
    if (object->materialType == TEXTDIF || object->materialType == TEXTMATTE) {
        float distance = *t;
        Point pnt = ray->origin + ray->direction * distance;
        Point oc = object->center;
        Vector ve = Point(oc.x, oc.y, oc.z+1) - oc;
        Normalize(&ve);
        Vector vn = Point(oc.x, oc.y+1, oc.z) - oc;
        Normalize(&vn);
        Vector vp = pnt - oc;
        Normalize(&vp);
        double phi = acos(-vn.dot(vp));
        float v = phi / M_PI;
        float u;
        float num1 = (float)acos(vp.dot(ve));
        float num = (num1 /(float) sin(phi));
        float theta = num /(float) (2 * M_PI);
        if (theta < 0 || theta == NAN) { theta = 0; }
        if (vn.cross(ve).dot(vp) > 0) {
            u = theta;
        }
        else {
            u = 1 - theta;
        }
        int x = (u * IMAGE_WIDTH) - 1;
        int y = (v * IMAGE_WIDTH) - 1;
        int p = (y * IMAGE_WIDTH + x) * 3;
        return Color(TEXT_DATA[p+2], TEXT_DATA[p+1], TEXT_DATA[p]);
    }
    else {
        return object->color;
    }
};
I call the colour code here in Trace:
if (object->materialType == MATTE)
    return getColor(object, ray, &t);

Ray shadowRay;
int isInShadow = 0;
shadowRay.origin.x = pHit.x + nHit.x * bias;
shadowRay.origin.y = pHit.y + nHit.y * bias;
shadowRay.origin.z = pHit.z + nHit.z * bias;
shadowRay.direction = light->object->center - pHit;
float len = shadowRay.direction.length();
Normalize(&shadowRay.direction);
float LdotN = shadowRay.direction.dot(nHit);
if (LdotN < 0)
    return 0;
Color lightColor = light->object->color;
for (int k = 0; k < numObjects; k++) {
    if (Intersect(objects[k], &shadowRay, &t) && !objects[k]->isLight) {
        if (objects[k]->materialType == GLASS)
            lightColor *= getColor(objects[k], &shadowRay, &t); // attenuate light color by glass color
        else
            isInShadow = 1;
        break;
    }
}
lightColor *= 1.f/(len*len);
return (isInShadow) ? 0 : getColor(object, &shadowRay, &t) * lightColor * LdotN;
}
I left out the rest of the code so as not to bog down the post, but it can be seen here. Any help is greatly appreciated. The only portion not included in the code is where I define the texture data, which, as I said, is taken straight from a bitmap file of the above image.
Thanks.
It could be that the texture is just washed out because the light is so bright and so close. Notice how, in the solid red case, there doesn't seem to be any gradation around the sphere; the red looks saturated.
Your u,v mapping looks right, but there could be a mistake there. I'd add some assert statements to make sure u and v are really between 0 and 1, and that the p index into your TEXT_DATA array is also within range.
If you're debugging your textures, you should use a constant material whose color is determined only by the texture and not the lights. That way you can make sure you are correctly mapping your texture to your primitive and filtering it properly before doing any lighting on it. Then you know that part isn't the problem.
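As a minimal sketch of those two suggestions combined (bounds asserts plus an unlit texture lookup): the function below returns the texel with no lighting applied. IMAGE_WIDTH, TEXT_DATA, and Color are the names from the question's code; IMAGE_HEIGHT and textureDebugColor are assumptions added for the example.

#include <cassert>

Color textureDebugColor(float u, float v)
{
    assert(u >= 0.0f && u <= 1.0f);           // catch a bad spherical mapping early
    assert(v >= 0.0f && v <= 1.0f);

    int x = (int)(u * (IMAGE_WIDTH  - 1));    // stay inside [0, width - 1]
    int y = (int)(v * (IMAGE_HEIGHT - 1));    // note: the original used IMAGE_WIDTH for y as well
    int p = (y * IMAGE_WIDTH + x) * 3;
    assert(p >= 0 && p + 2 < IMAGE_WIDTH * IMAGE_HEIGHT * 3);

    // BGR byte order, as in the original lookup; no lighting applied.
    return Color(TEXT_DATA[p + 2], TEXT_DATA[p + 1], TEXT_DATA[p]);
}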