Where is my kd tree traversal code wrong? - c++

I was optimizing my C++ raytracer. I'm tracing single rays through kd-trees. Until now I was using Havran's recursive algorithm 'B', which seems antiquated and overblown in an OOP setting. My new code is as short as possible (and hopefully both more easily optimized by the compiler and more easily maintained):
struct StackElement {
    KDTreeNode<PT>* node;
    float tmax;
    array<float, 3> origin;
};

// initialize explicit stack
stack<StackElement> mystack;
// initialize local variables
KDTreeNode<PT>* node = tree.root;
array<float, 3> origin {ray.origin[0], ray.origin[1], ray.origin[2]};
const array<float, 3> direction {ray.direction[0], ray.direction[1], ray.direction[2]};
const array<float, 3> invDirection {1.0f / ray.direction[0], 1.0f / ray.direction[1], 1.0f / ray.direction[2]};
float tmax = numeric_limits<float>::max();
float tClosestIntersection = numeric_limits<float>::max();
bool notFullyTraversed = true;

while (notFullyTraversed) {
    if (node->isLeaf()) {
        // test all primitives inside the leaf
        for (auto p : node->primitives()) {
            p->intersect(ray, tClosestIntersection, intersection, tmax);
        }
        // if leaf + empty stack => return
        if (mystack.empty()) {
            notFullyTraversed = false;
        } else {
            // pop the top element
            origin = mystack.top().origin;
            tmax = mystack.top().tmax;
            node = mystack.top().node;
            mystack.pop();
        }
    } else {
        // get axis of node and its split plane
        const int axis = node->axis();
        const float plane = node->splitposition();
        // test if ray is not parallel to the plane
        if (fabs(direction[axis]) > EPSILON) {
            const float t = (plane - origin[axis]) * invDirection[axis];
            // if the ray intersects the plane, both children must be considered
            if (0.0f < t && t < tmax) {
                // traverse near first, then far. Set tmax = t for near
                tmax = t;
                // push only the far child onto the stack
                mystack.push({
                    (origin[axis] > plane) ? node->leftChild() : node->rightChild(),
                    tmax - t,
                    {origin[0] + direction[0] * t, origin[1] + direction[1] * t, origin[2] + direction[2] * t}
                });
            }
        }
        // in every case: traverse the near child first
        node = (origin[axis] > plane) ? node->rightChild() : node->leftChild();
    }
}
return intersection.found;
It's not traversing the far child often enough. Which case am I missing?

One problem was small (original, wrong code):
//traverse near first, then far. Set tmax = t for near
tmax = t;
//push only far child onto stack
mystack.push({ ... , tmax - t, ... });
it always pushes 0.0f onto the stack as the exit distance for the far node, meaning no positive t is accepted for intersections.
Swapping both lines fixes that problem.
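For reference, a minimal sketch of the corrected ordering (same names as the code above, only the two lines swapped):

// push the far child first, while tmax still holds the exit distance of the current node;
// since the stored origin is moved to the split point, the far child's exit distance
// becomes (old tmax) - t rather than 0.0f
mystack.push({
    (origin[axis] > plane) ? node->leftChild() : node->rightChild(),
    tmax - t,
    {origin[0] + direction[0] * t,
     origin[1] + direction[1] * t,
     origin[2] + direction[2] * t}
});
// only now clamp tmax for the near child
tmax = t;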
My recursive reference's stack trace / decisions are still different (Havran's algorithm takes about 25% more iterations), yet the output pictures share 99.5% of their pixels. That is within floating-point rounding error, but it still does not answer the question: which case is not recognized by this simplified implementation? Or which operation is not numerically stable enough in this version?

Related

Improper reflection in recursive ray tracer

I'm implementing a recursive ray tracer with reflection. The ray tracer is currently reflecting areas that are in shadow, and I don't know why. The shadow aspect of the ray tracer works as expected when the reflective code is commented out, so I don't think that's the issue.
Vec Camera::shade(Vec accumulator,
                  Ray ray,
                  vector<Surface*> surfaces,
                  vector<Light*> lights,
                  int recursion_depth) {
    if (recursion_depth == 0) return Vec(0,0,0);

    double closestIntersection = numeric_limits<double>::max();
    Surface* cs;
    for (unsigned int i = 0; i < surfaces.size(); i++) {
        Surface* s = surfaces[i];
        double intersection = s->intersection(ray);
        if (intersection > EPSILON && intersection < closestIntersection) {
            closestIntersection = intersection;
            cs = s;
        }
    }

    if (closestIntersection < numeric_limits<double>::max()) {
        Point intersectionPoint = ray.origin + ray.dir*closestIntersection;
        Vec intersectionNormal = cs->calculateIntersectionNormal(intersectionPoint);
        Material materialToUse = cs->material;

        for (unsigned int j = 0; j < lights.size(); j++) {
            Light* light = lights[j];
            Vec dirToLight = (light->origin - intersectionPoint).norm();
            Vec dirToCamera = (this->eye - intersectionPoint).norm();
            bool visible = true;
            for (unsigned int k = 0; k < surfaces.size(); k++) {
                Surface* s = surfaces[k];
                double t = s->intersection(Ray(intersectionPoint, dirToLight));
                if (t > EPSILON && t < closestIntersection) {
                    visible = false;
                    break;
                }
            }
            if (visible) {
                accumulator = accumulator + this->color(dirToLight, intersectionNormal,
                    intersectionPoint, dirToCamera, light, materialToUse);
            }
        }

        //Reflective ray
        //Vec r = d − 2(d · n)n
        if (materialToUse.isReflective()) {
            Vec d = ray.dir;
            Vec r_v = d - intersectionNormal*2*intersectionNormal.dot(d);
            Ray r(intersectionPoint + intersectionNormal*EPSILON, r_v);
            //km is the ideal specular component of the material, and mult is component-wise multiplication
            return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km);
        }
        else
            return accumulator;
    }
    else
        return accumulator;
}
Vec Camera::color(Vec dirToLight,
                  Vec intersectionNormal,
                  Point intersectionPoint,
                  Vec dirToCamera,
                  Light* light,
                  Material material) {
    //kd I max(0, n · l) + ks I max(0, n · h)^p
    Vec I(light->r, light->g, light->b);
    double dist = (intersectionPoint - light->origin).magnitude();
    I = I/(dist*dist);
    Vec h = (dirToLight + dirToCamera)/((dirToLight + dirToCamera).magnitude());
    Vec kd = material.kd;
    Vec ks = material.ks;
    Vec diffuse = kd*I*fmax(0.0, intersectionNormal.dot(dirToLight));
    Vec specular = ks*I*pow(fmax(0.0, intersectionNormal.dot(h)), material.r);
    return diffuse + specular;
}
I've provided my output and the expected output. The lighting looks a bit different because mine was originally an .exr file and the other is a .png, but I've drawn arrows in my output where the surface should be reflecting shadows but isn't.
A couple of things to check:
The visibility check in the inner for loop might be returning a false positive (i.e. it's calculating that all surfaces[k] are not closer to lights[j] than your intersection point, for some j). This would cause it to incorrectly add that light[j]'s contribution to your accumulator. This would result in missing shadows, but it ought to happen everywhere, including your top recursion level, whereas you're only seeing missing shadows in reflections.
There might be an error in the color() method that's returning some wrong value that's then being added into accumulator. Although without seeing that code, it's hard to know for sure.
You're using postfix decrement on recursion_depth inside the materialToUse.IsReflective() check. Can you verify that the decremented value of recursion_depth is actually being passed to the shade() method call? (And if not, try changing to prefix decrement).
return this->shade(... recursion_depth--)...
EDIT: Can you also verify that recursion_depth is just a parameter to the shade() method, i.e. that there isn't a global / static recursion_depth anywhere. Assuming that there isn't (and there shouldn't be), you can change the call above to
return this->shade(... recursion_depth - 1)...
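For what it's worth, here is a minimal, self-contained sketch (not the poster's code) showing why the postfix form passes the old, undecremented value:

#include <iostream>

void f(int depth) { std::cout << "called with depth = " << depth << '\n'; }

int main() {
    int recursion_depth = 3;
    f(recursion_depth--);   // passes 3: postfix yields the value *before* decrementing
    f(recursion_depth - 1); // passes 1 (recursion_depth is now 2) and does not modify it
}

With the postfix form inside shade(), every recursive call receives the same depth, so the recursion never terminates via the depth check.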
EDIT 2: A couple of other things to look at:
In color(), I don't understand why you're including the direction to the camera in your calculations. The color of intersections other than the first one, per pixel, ought to be independent of where the camera is. But I doubt that's the cause of this issue.
Verify that return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); is doing the right thing with that matrix multiplication. Why are you multiplying by materialToUse.km?
Verify that materialToUse.km is constant per surface (i.e. it doesn't change over the geometry of the surface, the depth of iteration, or anything else).
Break up the statement return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); into its component objects, so you can see the intermediate results in the debugger:
Vec reflectedColor = this->shade(accumulator, r, surfaces, lights, recursion_depth - 1);
Vec multipliedColor = reflectedColor.mult(materialToUse.km);
return multipliedColor;
Determine the image (x, y) coordinates of one of your problematic pixels. Set a conditional breakpoint that's triggered when rendering that pixel, and then step through your shade() method. Assuming you pick the pixel pointed to by the bottom right arrow in your example image, you ought to see one recursion into shade(). Stepping through that first recursion, you'll see that your code is incorrectly adding the light contribution from the floor when it should be in shadow.
To answer my own question: I was not checking that t is less than the distance from the intersection point to the light position.
Instead of:
if (t > EPSILON && t < closestIntersection) {
    visible = false;
    break;
}
it should be:
if (t > EPSILON && t < max_t) {
    visible = false;
    break;
}
where max_t is
double max_t = dirToLight.magnitude();
before dirToLight has been normalized.
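Putting the pieces together, the shadow loop would then look roughly like this (a sketch based on the code above; the hypothetical intermediate vector toLight only makes the "before normalization" point explicit):

Vec toLight = light->origin - intersectionPoint;
double max_t = toLight.magnitude();   // distance from the hit point to the light
Vec dirToLight = toLight.norm();

bool visible = true;
for (unsigned int k = 0; k < surfaces.size(); k++) {
    double t = surfaces[k]->intersection(Ray(intersectionPoint, dirToLight));
    // only occluders *between* the hit point and the light cast a shadow
    if (t > EPSILON && t < max_t) {
        visible = false;
        break;
    }
}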

Ray-triangle intersection algorithm not intersecting (C++)

I've been trying to implement the Moller-Trumbore ray-triangle intersection algorithm in my raytracing code. The code is supposed to read in a mesh and light sources, fire off rays from the light source, and return the triangle in the mesh that each ray intersects. Here is my implementation of the algorithm:
//Moller-Trumbore intersection algorithm
void getFaceIntersect(modelStruct m, ray r, hitFaceStruct& hitFaces)
{
    // Constant throughout loop
    point origin = r.p0;
    point direction = r.u;
    hitFaces.isHit = false;

    for (int i = 0; i < m.faces; i++)
    {
        // Get face vertices
        point v1 = m.vertList[m.faceList[i].v1];
        point v2 = m.vertList[m.faceList[i].v2];
        point v3 = m.vertList[m.faceList[i].v3];
        // Get two edges
        point edge1 = v2 - v1;
        point edge2 = v3 - v1;
        // Get p
        point p = direction.cross(direction, edge2);
        // Use p to find determinant
        double det = p.dot(edge1, p);
        // If the determinant is about 0, the ray lies in the plane of the triangle
        if (abs(det) < 0.00000000001)
        {
            continue;
        }
        double inverseDet = 1 / det;
        point v1ToOrigin = (origin - v1);
        double u = v1ToOrigin.dot(v1ToOrigin, p) * inverseDet;
        // If u is not between 0 and 1, no hit
        if (u < 0 || u > 1)
        {
            continue;
        }
        // Used for calculating v
        point q = v1ToOrigin.cross(v1ToOrigin, edge1);
        double v = direction.dot(direction, q) * inverseDet;
        if (v < 0 || (u + v) > 1)
        {
            continue;
        }
        double t = q.dot(edge2, q) * inverseDet;
        // gets closest face
        if (t < abs(hitFaces.s)) {
            hitFaceStruct goodStruct = hitFaceStruct();
            goodStruct.face = i;
            goodStruct.hitPoint = p;
            goodStruct.isHit = true;
            goodStruct.s = t;
            hitFaces = goodStruct;
            break;
        }
    }
}
The relevant code for hitFaceStruct and modelStruct is as follows:
typedef struct _hitFaceStruct
{
    int face;        // the index of the sphere in question in the list of faces
    float s;         // the distance from the ray that hit it
    bool isHit;
    point hitPoint;
} hitFaceStruct;

typedef struct _modelStruct {
    char *fileName;
    float scale;
    float rot_x, rot_y, rot_z;
    float x, y, z;
    float r_amb, g_amb, b_amb;
    float r_dif, g_dif, b_dif;
    float r_spec, g_spec, b_spec;
    float k_amb, k_dif, k_spec, k_reflective, k_refractive;
    float spec_exp, index_refraction;
    int verts, faces, norms = 0;    // Number of vertices, faces, normals, and spheres in the system
    point *vertList, *normList;     // Vertex and Normal Lists
    faceStruct *faceList;           // Face List
} modelStruct;
Whenever I shoot a ray, the values of u or v in the algorithm code always come out to a large negative number, rather than the expected small, positive one. The direction vector of the ray is normalized before I pass it on to the intersection code, and I'm positive I'm firing rays that would normally hit the mesh. Can anyone please help me spot my error here?
Thanks!
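For comparison, here is a minimal reference sketch of the Möller-Trumbore test written against a hypothetical vec3 type with free cross()/dot() helpers (not the poster's point class); lining it up against the code above can help spot where the operand order differs:

#include <cmath>

struct vec3 { double x, y, z; };
static vec3 operator-(vec3 a, vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static vec3 cross(vec3 a, vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and fills t, u, v if the ray (orig, dir) hits triangle (v0, v1, v2).
bool mollerTrumbore(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2,
                    double& t, double& u, double& v)
{
    const double EPS = 1e-9;
    vec3 edge1 = v1 - v0;
    vec3 edge2 = v2 - v0;
    vec3 pvec  = cross(dir, edge2);          // p = D x E2
    double det = dot(edge1, pvec);           // det = E1 . p
    if (std::fabs(det) < EPS) return false;  // ray parallel to triangle plane
    double invDet = 1.0 / det;
    vec3 tvec = orig - v0;                   // T = O - V0
    u = dot(tvec, pvec) * invDet;
    if (u < 0.0 || u > 1.0) return false;
    vec3 qvec = cross(tvec, edge1);          // q = T x E1
    v = dot(dir, qvec) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(edge2, qvec) * invDet;           // distance along the ray
    return t > EPS;                          // intersection in front of the origin
}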

Multithreading returns an unhandled exception for storing information

I will try to explain my problem as clearly as possible. I have a multithreading framework I have to work on: a path-tracer renderer. It gives me an error when I try to store some information produced by my threads. To avoid posting all the code, I will explain what I mean step by step:
my TileTracer class is a thread
class TileTracer : public Thread{
...
}
and I have a certain number of threads:
#define MAXTHREADS 32
TileTracer* worker[MAXTHREADS];
the number of working threads is set in the following initialization code, where the threads are also started:
void Renderer::Init(){
    accumulator = (vec3*)MALLOC64(sizeof(vec3) * SCRWIDTH * SCRHEIGHT);
    memset(accumulator, 0, SCRWIDTH * SCRHEIGHT * sizeof(vec3));
    SYSTEM_INFO systeminfo;
    GetSystemInfo(&systeminfo);
    int cores = systeminfo.dwNumberOfProcessors;
    workerCount = MIN(MAXTHREADS, cores);
    for (int i = 0; i < workerCount; i++)
    {
        goSignal[i] = CreateEvent(NULL, FALSE, FALSE, 0);
        doneSignal[i] = CreateEvent(NULL, FALSE, FALSE, 0);
    }
    // create and start worker threads
    for (int i = 0; i < workerCount; i++)
    {
        worker[i] = new TileTracer();
        worker[i]->init(accumulator, i);
        worker[i]->start(); // start the thread
    }
    samples = 0;
}
the init() method for my thread is simply defined in my header as the following:
void init(vec3* target, int idx) { accumulator = target, threadIdx = idx; }
while the start() is:
void Thread::start()
{
    DWORD tid = 0;
    m_hThread = (unsigned long*)CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE)sthread_proc, (Thread*)this, 0, &tid );
    setPriority( Thread::P_NORMAL );
}
somehow (I don't know exactly where), each thread calls the following main method, which is meant to compute the color of a pixel (you don't have to understand it all):
vec3 TileTracer::Sample(vec3 O, vec3 D, int depth){
    vec3 color(0, 0, 0);
    // trace path extension ray
    float t = 1000.0f, u, v;
    Triangle* tri = 0;
    Scene::mbvh->pool4[0].TraceEmbree(O, D, t, u, v, tri, false);
    totalRays++;
    // handle intersection, if any
    if (tri)
    {
        // determine material color at intersection point
        Material* mat = Scene::matList[tri->material];
        Texture* tex = mat->GetTexture();
        vec3 diffuse;
        if (tex)
        {
            ...
        }
        else diffuse = mat->GetColor();
        vec3 I = O + t * D; // we get exactly to the intersection point on the object
        // we need to store the info of each bounce of the basePath for the offsetPaths
        basePath baseInfo = { O, D, I, tri };
        basePathHits.push_back(baseInfo);
        vec3 L = vec3(-1 + Rand(2.0f), 20, 9 + Rand(2.0f)) - I; // (-1,20,9) is the hard-coded light position; I add Rand(2.0f) on the X and Z axes
                                                                // so that I have an area light instead of a point light
        float dist = length(L) * 0.99f; // if I cast a ray towards the light source I don't want to hit the source point or the light source itself,
                                        // otherwise it counts as a shadow even if there is none. So I make the ray a bit shorter by multiplying by 0.99
        L = normalize(L);
        float ndotl = dot(tri->N, L);
        if (ndotl > 0)
        {
            Triangle* tri = 0;
            totalRays++;
            Scene::mbvh->pool4[0].TraceEmbree(I + L * EPSILON, L, dist, u, v, tri, true); // it just casts a ray towards the light;
            // I am just interested in understanding if I hit something or not.
            // If I don't hit anything I calculate the light transport (diffuse * ndotl * lightBrightness * 1/dist^2)
            if (!tri) color += diffuse * ndotl * vec3(1000.0f, 1000.0f, 850.0f) * (1.0f / (dist * dist));
        }
        // continue random walk since it is a path tracer (we do it only if we have fewer than 20 bounces)
        if (depth < 20)
        {
            // russian roulette
            float Psurvival = CLAMP((diffuse.r + diffuse.g + diffuse.b) * 0.33333f, 0.2f, 0.8f);
            if (Rand(1.0f) < Psurvival)
            {
                vec3 R = DiffuseReflectionCosineWeighted(tri->N); // cosine-weighted direction
                color += diffuse * Sample(I + R * EPSILON, R, depth + 1) * (1.0f / Psurvival);
            }
        }
    }
    return color;
}
Now, you don't have to understand the whole code, because my question concerns just the following: in the last method there are these two lines:
basePath baseInfo = { O, D, I, tri };
basePathHits.push_back(baseInfo);
I just create a simple struct "basePath" defined as follows:
struct basePath
{
    vec3 O, D, hit;
    Triangle* tri;
};
and I store it in a vector of structs defined at the beginning of my code:
vector<basePath> basePathHits;
The problem is that this seems to cause an exception. Indeed, if I try to store this information, which I need later in my code, the program crashes with the exception:
Unhandled exception at 0x0FD4FAC1 (msvcr120d.dll) in Template.exe: 0xC0000005: Access violation reading location 0x3F4C1BC1.
Some other times, without changing anything, the error is different and it's the following one:
Without storing that info, everything works perfectly. Likewise, if I set the number of cores to 1, everything works. So why does this break under multithreading? Do not hesitate to ask for further info if this is not enough.
Try making the following change to your code:
//we need to store the info of each bounce of the basePath for the offsetPaths
basePath baseInfo = { O, D, I, tri };
static std::mutex myMutex;
myMutex.lock();
basePathHits.push_back(baseInfo);
myMutex.unlock();
If that removes the exceptions then the problem is unsynchronised access to basePathHits (i.e. multiple threads calling push_back simultaneously). You need to think carefully about what the best solution will be, to minimise the impact of synchronisation on performance.
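If you do go the mutex route, a slightly safer variant (a sketch, assuming the surrounding code from the question) uses std::lock_guard so the lock is released even if push_back throws; it needs #include <mutex> at the top of the file:

#include <mutex>   // for std::mutex and std::lock_guard

// we need to store the info of each bounce of the basePath for the offsetPaths
basePath baseInfo = { O, D, I, tri };
{
    static std::mutex myMutex;                 // shared by all TileTracer threads
    std::lock_guard<std::mutex> lock(myMutex); // locks here, unlocks at the end of the block
    basePathHits.push_back(baseInfo);          // only one thread mutates the vector at a time
}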
Possibly I didn't see it, but there is no protection for the target vector, no mutex or atomic. And as far as I know, std::vector needs this for multithreaded access.

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refractions, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. Refraction is sort of working: the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when decreasing the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring totally disappears.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for the dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
    //first find the nearest intersection of a ray with an object
    Colour finalColour = skyBlue * (r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
    double t, t_min = INFINITY;
    int index_nearObj = -1;
    for(int i = 0; i < objSize; i++)
    {
        if(!dynamic_cast<Light *>(objects[i])) //skip light src
        {
            t = objects[i]->findParam(r);
            if(t > 0 && t < t_min)
            {
                t_min = t;
                index_nearObj = i;
            }
        }
    }
    //no intersection
    if(index_nearObj < 0)
        return finalColour;

    Vector intersect = r.getOrigin() + r.getDirection()*t_min;
    Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
    Colour objectColor = objects[index_nearObj]->getColor();
    Ray rRefl, rRefr; //reflected and refracted Ray
    Colour refl = finalColour, refr = finalColour; //reflected and refracted colours
    double reflectance = 0, transmittance = 0;

    if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
    {
        //handle reflection
        rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
        refl = raytrace(rRefl, depth + 1);
        reflectance = 1;
    }
    if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
    {
        //handle transmission
        rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
        refr = raytrace(rRefr, depth + 1);
    }

    Ray rShadow; //shadow ray
    bool shadowed;
    double t_light = -1;
    Colour localColour;
    Vector tmpv;
    //get material properties
    double ka = 0.2; //ambient coefficient
    double kd;       //diffuse coefficient
    double ks;       //specular coefficient
    Colour ambient = ka * objectColor; //ambient component
    Colour diffuse, specular;
    double brightness;
    localColour = ambient;

    //look if the object is in shadow or light
    //do this by casting a ray from the obj and
    //check if there is an intersection with another obj
    for(int i = 0; i < objSize; i++)
    {
        if(dynamic_cast<Light *>(objects[i])) //if object is a light
        {
            //for each light
            shadowed = false;
            //create Ray to light
            tmpv = objects[i]->getPosition() - intersect;
            rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
            t_light = objects[i]->findParam(rShadow);
            if(t_light < 0) //no intersect, which is quite impossible
                continue;
            //then we check if that Ray intersects one object that is not a light
            for(int j = 0; j < objSize; j++)
            {
                if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj) //if obj is not a light
                {
                    t = objects[j]->findParam(rShadow);
                    //if it is smaller we know the light is behind the object
                    //--> shadowed by this light
                    if (t >= 0 && t < t_light)
                    {
                        // Set the flag and stop the cycle
                        shadowed = true;
                        break;
                    }
                }
            }
            if(!shadowed)
            {
                rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
                //reflected ray from light src, for ks
                kd = maximum(0.0, (normal|rShadow.getDirection()));
                if(objects[index_nearObj]->getShiny() <= 0)
                    ks = 0;
                else
                    ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
                diffuse = kd * objectColor; // * objects[i]->getColour();
                specular = ks * objects[i]->getColor();
                brightness = 1 / (1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
                localColour += brightness * (diffuse + specular);
            }
        }
    }
    finalColour = localColour + (transmittance * refr + reflectance * refl);
    return finalColour;
}
Now the function that calculates the refracted Ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal; //invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);
    //fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;
    if(n == 1.0)
        return r;
    if(cosT*cosT < 0.0) //tot inner refl
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also swapped the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same image (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    //the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give a general idea, not (for example) to reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal. Currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(). Again, you're currently using r.getDirection() in your calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
Hope that helps!
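Applying those suggestions to the question's calcRefractingRay gives roughly the following sketch. It reuses the question's Vector/Ray API and assumes a normalize() helper exists, as used in the answer above; besides normalizing the direction and guarding the square root, it also writes the second Fresnel denominator with n1, which I believe was a typo (n2 appeared twice) in the original:

Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal,
                              double &refl, double &trans) const
{
    Vector d = normalize(r.getDirection()); // work with a unit-length direction throughout
    double n1, n2, n;
    double cosI = (d|normal);               // dot product with the *normalized* direction
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal;                   // invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI*cosI);
    if(sinT2 > 1.0)                         // total internal reflection: check before the sqrt
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    double cosT = sqrt(1.0 - sinT2);
    // Fresnel equations
    double rn = (n1*cosI - n2*cosT)/(n1*cosI + n2*cosT);
    double rt = (n2*cosI - n1*cosT)/(n2*cosI + n1*cosT);
    refl = (rn*rn + rt*rt)*0.5;
    trans = 1.0 - refl;
    Vector dir = n*d + (n*cosI - cosT)*normal;  // refract the normalized direction
    return Ray(intersection + dir*BIAS, dir);
}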

Why does raytracer render spheres as ovals?

I've been hacking up a raytracer for the first time over the past few days. However, there are a few quirks that bother me and that I don't really know how to work out. One that has been there since the beginning is the shape of spheres in the scene: when rendered, they actually look like ovals. Of course, there is perspective in the scene, but the final shape still seems odd. I have attached a sample rendering; the problem is especially visible on the reflective sphere in the lower left part of the image.
I don't really know what could be causing this. It might be the ray-sphere intersection code, which looks as follows:
bool Sphere::intersect(Ray ray, glm::vec3& hitPoint) {
    // Compute A, B and C coefficients
    float a = glm::dot(ray.dir, ray.dir);
    float b = 2.0 * glm::dot(ray.dir, ray.org - pos);
    float c = glm::dot(ray.org - pos, ray.org - pos) - (rad * rad);
    // Find discriminant
    float disc = b * b - 4 * a * c;
    // if discriminant is negative there are no real roots, so return
    // false as ray misses sphere
    if (disc < 0)
        return false;
    // compute q
    float distSqrt = sqrt(disc);
    float q;
    if (b < 0)
        q = (-b - distSqrt)/2.0;
    else
        q = (-b + distSqrt)/2.0;
    // compute t0 and t1
    float t0 = q / a;
    float t1 = c / q;
    // make sure t0 is smaller than t1
    if (t0 > t1) {
        // if t0 is bigger than t1 swap them around
        float temp = t0;
        t0 = t1;
        t1 = temp;
    }
    // if t1 is less than zero, the object is in the ray's negative direction
    // and consequently the ray misses the sphere
    if (t1 < 0)
        return false;
    // if t0 is less than zero, the intersection point is at t1
    if (t0 < 0) {
        hitPoint = ray.org + t1 * ray.dir;
        return true;
    } else { // else the intersection point is at t0
        hitPoint = ray.org + t0 * ray.dir;
        return true;
    }
}
Or it could be another thing. Does anyone have an idea? Thanks so much!
It looks like you're using a really wide field of view (FoV). This gives the effect of a fish-eye lens, distorting the picture, especially towards the edges. Typically something like 90 degrees (i.e. 45 degrees in either direction) gives a reasonable picture.
The refraction actually looks quite good; it's inverted because the index of refraction is so high. Nice pictures are in this question.
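As an illustration of how the field of view enters the picture, here is a minimal sketch (not the poster's camera code, which isn't shown) of one common way to generate primary ray directions from a horizontal FoV; increasing fovDeg stretches the directions near the image edges, which is what distorts spheres into ovals there:

#include <glm/glm.hpp>
#include <cmath>

// Direction of the primary ray through pixel (px, py) for a camera looking down -Z.
// fovDeg is the full horizontal field of view; around 90 degrees gives a reasonable picture.
glm::vec3 primaryRayDir(int px, int py, int width, int height, float fovDeg) {
    float aspect  = float(width) / float(height);
    float halfTan = std::tan(glm::radians(fovDeg) * 0.5f);
    // map the pixel centre to [-1, 1] in x and y
    float ndcX = 2.0f * (px + 0.5f) / width - 1.0f;
    float ndcY = 1.0f - 2.0f * (py + 0.5f) / height;
    // scale by tan(fov/2): a larger fov pushes edge rays further outwards
    float camX = ndcX * halfTan * aspect;
    float camY = ndcY * halfTan;
    return glm::normalize(glm::vec3(camX, camY, -1.0f));
}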