Simple Ray Tracing with Lambertian Shading, Confusion - C++

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value (calculated by the ray-sphere intersection formula) to the origin, where my "camera" is, at (0, 0, 0); in other words, e + td.
The vector from the hit point to the light, l, I set to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I get by simply negating the projected view vector.
And the surface normal I get by subtracting the sphere's position from the hit point.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I would expect every component of the result to lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Wouldn't a z value of -15.8284 imply that the sphere center and the hit position are ~16 units apart along the z axis? The two z values are obviously within 1 of each other in absolute value, which leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            // Map pixel (i, j) to a point (u, v) on the image plane
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            // Perspective ray direction: -w * focal_length plus the u and v offsets
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                // Hit point: e + t*d
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                // Lambertian: color * I * max(0, n . l)  (operator* on vectors is the dot product)
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), each with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0, so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = (part * part) - part3 * ((part2 * part2) - (r * r));
    if (discriminant > 0)
    {
        float t_add = ((d * part2) + sqrt(discriminant)) / part3;
        float t_sub = ((d * part2) - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d * part2) + sqrt(discriminant)) / part3;
        float t_sub = ((d * part2) - sqrt(discriminant)) / part3;
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d * part2) + sqrt(discriminant)) / part3;
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
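For anyone following along, here is a minimal, self-contained sketch of the corrected solve (my own names and a bare-bones vector type, not the classes above):
#include <cmath>

struct Vec3 { float x, y, z; };
float dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Solves |e + t*d - c|^2 = r^2 for t and returns the nearest positive root,
// or -1 if the ray misses. Note the negated d.(e-c) in the numerator --
// exactly the sign the original code dropped.
float hitSphere(const Vec3 &e, const Vec3 &d, const Vec3 &c, float r)
{
    Vec3 ec = sub(e, c);
    float dd = dot(d, d);
    float dec = dot(d, ec);
    float disc = dec*dec - dd*(dot(ec, ec) - r*r);
    if (disc < 0.0f) return -1.0f;
    float sq = std::sqrt(disc);
    float t0 = (-dec - sq) / dd;
    float t1 = (-dec + sq) / dd;
    if (t0 > 0.0f) return t0;
    if (t1 > 0.0f) return t1;
    return -1.0f;
}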
Lambertian + Blinn-Phong

Related

Optimizing a Ray Tracer

I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3x3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r*-1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent; I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy (BVH). Right now the code intersects every ray with every triangle of every object. With a BVH, we instead ask: "given this ray, which triangles might it intersect?" This means that for each ray, you generally only need to test for intersection against a handful of bounding volumes and triangles, rather than every single triangle in the scene.
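To give a flavour of the per-node work, here is a minimal ray-vs-AABB slab test (a sketch with assumed types, not code from your project); a BVH is essentially a tree of such boxes, and you only descend into children the ray actually hits:
#include <algorithm>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Returns true if the ray e + t*d (t >= 0) hits the axis-aligned box [bmin, bmax].
bool hitAABB(const Vec3 &e, const Vec3 &d, const Vec3 &bmin, const Vec3 &bmax)
{
    float tmin = 0.0f, tmax = FLT_MAX;
    const float eo[3]  = { e.x, e.y, e.z };
    const float dir[3] = { d.x, d.y, d.z };
    const float lo[3]  = { bmin.x, bmin.y, bmin.z };
    const float hi[3]  = { bmax.x, bmax.y, bmax.z };
    for (int a = 0; a < 3; a++)
    {
        float inv = 1.0f / dir[a];       // IEEE infinities handle dir[a] == 0 (up to edge cases on the boundary)
        float t0 = (lo[a] - eo[a]) * inv;
        float t1 = (hi[a] - eo[a]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmax < tmin) return false;   // the slab intervals don't overlap: miss
    }
    return true;
}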
IntersectRayWithTriangleWithThisOrigin
From the look of it, it creates an inverse transform matrix from the triangle edges (the two edges act as the X and Y basis vectors). I don't get the Z axis, though: I would expect the ray direction there, not the position of the pixel (ray origin), but I may be misinterpreting something.
Anyway, the inverse matrix computation is the biggest problem: you are computing it for each triangle, per pixel, and that is a lot. It would be faster to compute the inverse transform matrix of each triangle once, before ray tracing, where X and Y are the edge basis vectors and Z is perpendicular to both, always facing the same direction relative to the camera. Then you just transform your ray into that space and check the limits of the intersection; that is just a matrix*vector multiply and a few ifs instead of an inverse matrix computation per triangle (see the sketch below).
Another way would be to solve the ray-vs-plane intersection algebraically, which should lead to a much simpler equation than matrix inversion; after that it is just a matter of bounds-checking against the basis vectors.
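A sketch of that precompute-once idea, reusing the post's V3/M33 classes as they are used above (the cross product and exact member names are assumptions):
// Once per triangle, before tracing: M = [ e1 | e2 | n ], where e1 = V1-V0,
// e2 = V2-V0 and n = e1 x e2; invert M once and store it.
struct TrianglePrecomp
{
    M33 Minv; // inverse of [ e1 | e2 | n ], computed once
    V3  V0;   // base vertex
};

// Per ray o + t*d: in the triangle's frame the plane is z = 0, so the
// intersection reduces to one matrix*vector transform and a few ifs.
bool hitTriangle(const TrianglePrecomp &tp, const V3 &o, const V3 &d, float &tOut)
{
    V3 O = tp.Minv * (o - tp.V0);
    V3 D = tp.Minv * d;
    if (D[2] == 0.0f) return false;           // ray parallel to the triangle plane
    float t = -O[2] / D[2];                   // plane hit: O.z + t*D.z = 0
    if (t < 0.0f) return false;               // behind the ray origin
    float u = O[0] + t * D[0];                // barycentric coordinates fall out directly
    float v = O[1] + t * D[1];
    if (u < 0.0f || v < 0.0f || u + v > 1.0f) return false;
    tOut = t;
    return true;
}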

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refractions, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. The refraction is kind of working: the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when decreasing the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring totally disappears.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
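Roughly, the operator overloads look like this (a reconstruction from how they are used below, not the exact class; the ! operator that appears later seems to normalize):
#include <cmath>

struct Vector
{
    double x, y, z;
    double operator|(const Vector &o) const { return x*o.x + y*o.y + z*o.z; } // dot product
    Vector operator~() const { return { -x, -y, -z }; }                       // inverted vector
    Vector operator!() const                                                  // unit vector (assumed)
    {
        double len = std::sqrt(x*x + y*y + z*z);
        return { x/len, y/len, z/len };
    }
};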
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
    //first find the nearest intersection of a ray with an object
    Colour finalColour = skyBlue * (r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
    double t, t_min = INFINITY;
    int index_nearObj = -1;
    for(int i = 0; i < objSize; i++)
    {
        if(!dynamic_cast<Light *>(objects[i]))//skip light src
        {
            t = objects[i]->findParam(r);
            if(t > 0 && t < t_min)
            {
                t_min = t;
                index_nearObj = i;
            }
        }
    }
    //no intersection
    if(index_nearObj < 0)
        return finalColour;
    Vector intersect = r.getOrigin() + r.getDirection()*t_min;
    Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
    Colour objectColor = objects[index_nearObj]->getColor();
    Ray rRefl, rRefr; //reflected and refracted Ray
    Colour refl = finalColour, refr = finalColour; //reflected and refracted colours
    double reflectance = 0, transmittance = 0;
    if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
    {
        //handle reflection
        rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
        refl = raytrace(rRefl, depth + 1);
        reflectance = 1;
    }
    if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
    {
        //handle transmission
        rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
        refr = raytrace(rRefr, depth + 1);
    }
    Ray rShadow; //shadow ray
    bool shadowed;
    double t_light = -1;
    Colour localColour;
    Vector tmpv;
    //get material properties
    double ka = 0.2; //ambient coefficient
    double kd; //diffuse coefficient
    double ks; //specular coefficient
    Colour ambient = ka * objectColor; //ambient component
    Colour diffuse, specular;
    double brightness;
    localColour = ambient;
    //check whether the object is in shadow or light:
    //cast a ray from the object and
    //check if there is an intersection with another object
    for(int i = 0; i < objSize; i++)
    {
        if(dynamic_cast<Light *>(objects[i])) //if object is a light
        {
            //for each light
            shadowed = false;
            //create Ray to light
            tmpv = objects[i]->getPosition() - intersect;
            rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
            t_light = objects[i]->findParam(rShadow);
            if(t_light < 0) //no intersect, which is quite impossible
                continue;
            //then check if that Ray intersects an object that is not a light
            for(int j = 0; j < objSize; j++)
            {
                if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj)//if obj is not a light
                {
                    t = objects[j]->findParam(rShadow);
                    //if it is smaller, the occluder sits between us and the light
                    //--> this point is shadowed
                    if (t >= 0 && t < t_light)
                    {
                        // Set the flag and stop the cycle
                        shadowed = true;
                        break;
                    }
                }
            }
            if(!shadowed)
            {
                rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
                //reflected ray from light src, for ks
                kd = maximum(0.0, (normal|rShadow.getDirection()));
                if(objects[index_nearObj]->getShiny() <= 0)
                    ks = 0;
                else
                    ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
                diffuse = kd * objectColor;// * objects[i]->getColour();
                specular = ks * objects[i]->getColor();
                brightness = 1 /(1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
                localColour += brightness * (diffuse + specular);
            }
        }
    }
    finalColour = localColour + (transmittance * refr + reflectance * refl);
    return finalColour;
}
Now the function that calculates the refracted ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal;//invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);
    //fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;
    if(n == 1.0)
        return r;
    if(cosT*cosT < 0.0)//tot inner refl
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also tried changing the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    //the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give general idea, not (for example) reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices as it was prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal; currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(), which is what you're currently using.
Also, there is an issue with the way you're checking for total internal reflection: you should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root, as sketched below.
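In code, those fixes look roughly like this (a sketch against the question's calcRefractingRay, assuming a free normalize() helper is available or added):
// Inside calcRefractingRay -- normalize first, then do all the dot products:
Vector d = normalize(r.getDirection());
double cosI = (d|normal);
// ... n1/n2 selection as in the pre-edit code ...
double sinT2 = n*n * (1.0 - cosI*cosI);
if (sinT2 > 1.0)            // total internal reflection: 1.0 - sinT2 is negative,
{                           // so bail out before taking the square root
    refl = 1;
    trans = 0;
    return calcReflectingRay(r, intersection, normal);
}
double cosT = sqrt(1.0 - sinT2);
Vector dir = n * d + (n * cosI - cosT) * normal;  // the normalized d again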
Hope that helps!

raytracing: why is my sphere rendered as an oval?

I am to write a raytracer; however, I already seem to have hit my first big problem. For whatever reason, my sphere (which, since I'm only beginning, I simply color white when a ray hits it) is rendered as an oval.
Furthermore, it seems that the distortion gets worse the farther I move the sphere's center away from x = 0 and y = 0.
Here's the intersection and main-loop code:
double const Sphere::getIntersection(Ray const& ray) const
{
    double t;
    double A = 1;
    double B = 2*( ray.dir[0]*(ray.origin[0] - center_[0]) + ray.dir[1]*(ray.origin[1] - center_[1]) + ray.dir[2]*(ray.origin[2] - center_[2]));
    double C = pow(ray.origin[0]-center_[0], 2) + pow(ray.origin[1]-center_[1], 2) + pow(ray.origin[2]-center_[2], 2) - radius_pow2_;
    double discr = B*B - 4*C;
    if(discr > 0)
    {
        t = (-B - sqrt(discr))/2;
        if(t <= 0)
        {
            t = (-B + sqrt(discr))/2;
        }
    }
    else t = 0;
    return t;
}
Sphere blub = Sphere(math3d::point(300., 300., -500.), 200.);
Ray mu = Ray();
// for all pixels of window
for (std::size_t y = 0; y < window.height(); ++y) {
    for (std::size_t x = 0; x < window.width(); ++x) {
        Pixel p(x, y);
        mu = Ray(math3d::point(0., 0., 0.), math3d::vector(float(x), float(y), -300.));
        if (blub.getIntersection(mu) == 0.) {
            p.color = Color(0.0, 0.0, 0.0);
        } else {
            p.color = Color(1., 1., 1.);
        }
    }
}
What I also do not understand is why my "oval" isn't centered on the picture. I have a window of 600 x 600 pixels, so putting the sphere's center at 300 x 300 should afaik put the sphere in the center of the window as well.
my specific solution
(Thanks to Thomas for pushing me in the right direction!)
As Thomas rightly said, my question was really two distinct problems. To get the sphere projected in the center, I did as he suggested and changed the origin and projection of the rays.
To get the perspective right, I had not realized that I already had to calculate the focal length from the image dimensions.
focal_length = sqrt(width^2 + height^2) / ( 2*tan( 45/2 ) )
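In code, that works out to something like this (my names; note the 45 degrees must be converted to radians for tan()):
#include <cmath>

const double PI = 3.14159265358979323846;
double width = 600.0, height = 600.0;  // window dimensions
double fov = 45.0 * PI / 180.0;        // 45 degrees, in radians
double focal_length = std::sqrt(width*width + height*height) / (2.0 * std::tan(fov / 2.0));
// Each ray then goes through its pixel relative to the image center:
// mu = Ray(math3d::point(0., 0., 0.),
//          math3d::vector(float(x) - width/2, float(y) - height/2, -focal_length));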
The result:
This is normal for linear perspective projections, and it is exacerbated by wide camera angles; see http://en.wikipedia.org/wiki/Perspective_projection_distortion. Most games use something like 90 degrees in the horizontal direction, 45 on either side. But by casting rays across 600 pixels in the x direction while going only 300 in the z direction, yours is significantly wider, 126 degrees to be precise.
The reason why your sphere doesn't appear centered is that you're casting rays from the bottom left corner of the screen:
mu = Ray(math3d::point(0.,0.,0.),math3d::vector(float(x),float(y),-300.));
That should be something like:
mu = Ray(math3d::point(width/2,height/2,0.),math3d::vector(float(x-width/2),float(y-height/2),-300.));

Intersection problems with ray-sphere intersection

I'm writing a simple ray tracer, and to keep it simple for now I've decided to just have spheres in my scene. I am at a stage where I merely want to confirm that my rays are intersecting a sphere in the scene properly, nothing else. I've created a Ray and Sphere class, and then a function in my main file which goes through each pixel to see if there's an intersection (relevant code is posted below). The problem is that the whole intersection with the sphere is acting rather strangely. If I create a sphere with center (0, 0, -20) and a radius of 1, then I get only one intersection, which is always at the very first pixel of what would be my image (upper-left corner). Once I reach a radius of 15 I suddenly get three intersections in the upper-left region. A radius of 18 gives me six intersections, and once I reach a radius of 20+ I suddenly get an intersection for EACH pixel, so something is not behaving the way it's supposed to.
I was suspicious that my ray-sphere intersection code might be at fault here, but having looked through it and searched the net for more information, most solutions describe the very same approach I use, so I assume it shouldn't(!) be at fault. So... I am not exactly sure what I am doing wrong; it could be my intersection code or it could be something else causing the problems. I just can't seem to find it. Could it be that I am thinking wrong when giving values for the sphere and rays? Below is the relevant code.
Sphere class:
Sphere::Sphere(glm::vec3 center, float radius)
    : m_center(center), m_radius(radius), m_radiusSquared(radius*radius)
{
}

//Sphere-ray intersection. Equation: (P-C)^2 - R^2 = 0, P = o+t*d
//(P-C)^2 - R^2 => (o+t*d-C)^2-R^2 => o^2+(td)^2+C^2+2td(o-C)-2oC-R^2
//=> at^2+bt+c, a = d*d, b = 2d(o-C), c = (o-C)^2-R^2
//o = ray origin, d = ray direction, C = sphere center, R = sphere radius
bool Sphere::intersection(Ray& ray) const
{
    //Squared distance between ray origin and sphere center
    float squaredDist = glm::dot(ray.origin()-m_center, ray.origin()-m_center);
    //If the distance is less than the squared radius of the sphere...
    if(squaredDist <= m_radiusSquared)
    {
        //Point is in sphere, consider as no intersection existing
        //std::cout << "Point inside sphere..." << std::endl;
        return false;
    }
    //Will hold solution to quadratic equation
    float t0, t1;
    //Calculating the coefficients of the quadratic equation
    float a = glm::dot(ray.direction(),ray.direction()); // a = d*d
    float b = 2.0f*glm::dot(ray.direction(),ray.origin()-m_center); // b = 2d(o-C)
    float c = glm::dot(ray.origin()-m_center, ray.origin()-m_center) - m_radiusSquared; // c = (o-C)^2-R^2
    //Calculate discriminant
    float disc = (b*b)-(4.0f*a*c);
    if(disc < 0) //If discriminant is negative no intersection happens
    {
        //std::cout << "No intersection with sphere..." << std::endl;
        return false;
    }
    else //If discriminant is positive one or two intersections (two solutions) exists
    {
        float sqrt_disc = glm::sqrt(disc);
        t0 = (-b - sqrt_disc) / (2.0f * a);
        t1 = (-b + sqrt_disc) / (2.0f * a);
    }
    //If the second intersection has a negative value then the intersections
    //happen behind the ray origin which is not considered. Otherwise t0 is
    //the intersection to be considered
    if(t1 < 0)
    {
        //std::cout << "No intersection with sphere..." << std::endl;
        return false;
    }
    else
    {
        //std::cout << "Intersection with sphere..." << std::endl;
        return true;
    }
}
Program:
#include "Sphere.h"
#include "Ray.h"
void renderScene(const Sphere& s);
const int imageWidth = 400;
const int imageHeight = 400;
int main()
{
//Create sphere with center in (0, 0, -20) and with radius 10
Sphere testSphere(glm::vec3(0.0f, 0.0f, -20.0f), 10.0f);
renderScene(testSphere);
return 0;
}
//Shoots rays through each pixel and check if there's an intersection with
//a given sphere. If an intersection exists then the counter is increased.
void renderScene(const Sphere& s)
{
//Ray r(origin, direction)
Ray r(glm::vec3(0.0f), glm::vec3(0.0f));
//Will hold the total amount of intersections
int counter = 0;
//Loops through each pixel...
for(int y=0; y<imageHeight; y++)
{
for(int x=0; x<imageWidth; x++)
{
//Change ray direction for each pixel being processed
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth), ((imageHeight/2-y)/(float)imageHeight), -1.0f));
//If current ray intersects sphere...
if(s.intersection(r))
{
//Increase counter
counter++;
}
}
}
std::cout << counter << std::endl;
}
Your second solution (t1) to the quadratic equation is wrong in the case disc > 0, where you need something like:
float sqrt_disc = glm::sqrt(disc);
t0 = (-b - sqrt_disc) / (2 * a);
t1 = (-b + sqrt_disc) / (2 * a);
I think it's best to write out the equation in this form rather than turning the division by 2 into a multiplication by 0.5, because the more the code resembles the mathematics, the easier it is to check.
A few other minor comments:
It seemed confusing to re-use the name disc for sqrt(disc), so I used a new variable name above.
You don't need to test for t0 > t1, since you know that both a and sqrt_disc are positive, and so t1 is always greater than t0.
If the ray origin is inside the sphere, it's possible for t0 to be negative and t1 to be positive. You don't seem to handle this case.
You don't need a special case for disc == 0, as the general case computes the same values as the special case. (And the fewer special cases you have, the easier it is to check your code.)
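Putting those comments together, a cleaned-up test might look like this (a sketch, not a drop-in replacement for the exact classes above):
#include <cmath>
#include <glm/glm.hpp>

// Returns true and the nearest non-negative t when the ray o + t*d hits the
// sphere with center C and radius R.
bool intersectSphere(const glm::vec3 &o, const glm::vec3 &d,
                     const glm::vec3 &C, float R, float &tOut)
{
    glm::vec3 oc = o - C;
    float a = glm::dot(d, d);
    float b = 2.0f * glm::dot(d, oc);
    float c = glm::dot(oc, oc) - R*R;
    float disc = b*b - 4.0f*a*c;
    if (disc < 0.0f) return false;            // no separate case for disc == 0 needed
    float sqrt_disc = std::sqrt(disc);
    float t0 = (-b - sqrt_disc) / (2.0f * a);
    float t1 = (-b + sqrt_disc) / (2.0f * a); // t1 >= t0 always, since a > 0
    if (t1 < 0.0f) return false;              // both hits behind the ray origin
    tOut = (t0 >= 0.0f) ? t0 : t1;            // origin inside the sphere -> use t1
    return true;
}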
If I understand your code correctly, you might want to try:
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth),
                         ((imageHeight/2-y)/(float)imageHeight),
                         -1.0f));
Right now, you've positioned the camera one unit away from the screen, but the rays can shoot as much as 400 units to the right and down. This is a very broad field of view. Also, your rays are only sweeping one octant of space. This is why you only get a handful of pixels in the upper-left corner of the screen. The code I wrote above should rectify that.

How to draw an Arc in OpenGL

While making a little Pong game in C++ OpenGL, I decided it'd be fun to create arcs (semi-circles) when stuff bounces. I decided to skip Bezier curves for the moment and just go with straight algebra, but I didn't get far. My algebra follows a simple quadratic function (y = +- sqrt(mx+c)).
This little excerpt is just an example I've yet to fully parameterize; I just wanted to see how it would look. When I draw this, however, it gives me a straight vertical line where the curve's tangent approaches -1.0 / 1.0.
Is this a limitation of the GL_LINE_STRIP style or is there an easier way to draw semi-circles / arcs? Or did I just completely miss something obvious?
void Ball::drawBounce()
{
    float piecesToDraw = 100.0f;
    float arcWidth = 10.0f;
    float arcAngle = 4.0f;
    glBegin(GL_LINE_STRIP);
    for (float i = 0.0f; i < piecesToDraw; i += 1.0f) // Positive half
    {
        float currentX = (i / piecesToDraw) * arcWidth;
        glVertex2f(currentX, sqrtf((-currentX * arcAngle) + arcWidth));
    }
    for (float j = piecesToDraw; j > 0.0f; j -= 1.0f) // Negative half (go backwards in X direction now)
    {
        float currentX = (j / piecesToDraw) * arcWidth;
        glVertex2f(currentX, -sqrtf((-currentX * arcAngle) + arcWidth));
    }
    glEnd();
}
Thanks in advance.
What is the purpose of sqrtf((-currentX * arcAngle) + arcWidth)? When i > 25, that expression becomes imaginary. The proper way of doing this would be to use sin()/cos() to generate the X and Y coordinates for a semi-circle, as stated in your question. If you want to use a parabola instead, the cleaner way would be to calculate y = H - H(x/W)^2.
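A minimal sketch of the sin()/cos() approach (parameter names here are illustrative, not from the original code):
#include <cmath>
#include <GL/gl.h>

// Draws a semi-circular arc of the given radius centered at (cx, cy).
void drawArc(float cx, float cy, float radius, int segments)
{
    const float PI = 3.14159265f;
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= segments; ++i)
    {
        float angle = PI * (float)i / (float)segments; // sweep 0..PI for a semi-circle
        glVertex2f(cx + radius * cosf(angle), cy + radius * sinf(angle));
    }
    glEnd();
}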