I'm implementing a recursive ray tracer with reflection. The reflections currently ignore shadows: areas that should appear shadowed in a reflection are rendered as lit, and I don't know why. The shadow aspect of the ray tracer works as expected when the reflection code is commented out, so I don't think that's the issue.
Vec Camera::shade(Vec accumulator,
                  Ray ray,
                  vector<Surface*> surfaces,
                  vector<Light*> lights,
                  int recursion_depth) {
    if (recursion_depth == 0) return Vec(0,0,0);

    double closestIntersection = numeric_limits<double>::max();
    Surface* cs;
    for (unsigned int i = 0; i < surfaces.size(); i++) {
        Surface* s = surfaces[i];
        double intersection = s->intersection(ray);
        if (intersection > EPSILON && intersection < closestIntersection) {
            closestIntersection = intersection;
            cs = s;
        }
    }
    if (closestIntersection < numeric_limits<double>::max()) {
        Point intersectionPoint = ray.origin + ray.dir*closestIntersection;
        Vec intersectionNormal = cs->calculateIntersectionNormal(intersectionPoint);
        Material materialToUse = cs->material;
        for (unsigned int j = 0; j < lights.size(); j++) {
            Light* light = lights[j];
            Vec dirToLight = (light->origin - intersectionPoint).norm();
            Vec dirToCamera = (this->eye - intersectionPoint).norm();
            bool visible = true;
            for (unsigned int k = 0; k < surfaces.size(); k++) {
                Surface* s = surfaces[k];
                double t = s->intersection(Ray(intersectionPoint, dirToLight));
                if (t > EPSILON && t < closestIntersection) {
                    visible = false;
                    break;
                }
            }
            if (visible) {
                accumulator = accumulator + this->color(dirToLight, intersectionNormal,
                                                        intersectionPoint, dirToCamera, light, materialToUse);
            }
        }
        //Reflective ray
        //Vec r = d − 2(d · n)n
        if (materialToUse.isReflective()) {
            Vec d = ray.dir;
            Vec r_v = d - intersectionNormal*2*intersectionNormal.dot(d);
            Ray r(intersectionPoint + intersectionNormal*EPSILON, r_v);
            //km is the ideal specular component of the material, and mult is component-wise multiplication
            return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km);
        }
        else
            return accumulator;
    }
    else
        return accumulator;
}
Vec Camera::color(Vec dirToLight,
                  Vec intersectionNormal,
                  Point intersectionPoint,
                  Vec dirToCamera,
                  Light* light,
                  Material material) {
    //kd I max(0, n · l) + ks I max(0, n · h)^p
    Vec I(light->r, light->g, light->b);
    double dist = (intersectionPoint - light->origin).magnitude();
    I = I/(dist*dist);
    Vec h = (dirToLight + dirToCamera)/((dirToLight + dirToCamera).magnitude());
    Vec kd = material.kd;
    Vec ks = material.ks;
    Vec diffuse = kd*I*fmax(0.0, intersectionNormal.dot(dirToLight));
    Vec specular = ks*I*pow(fmax(0.0, intersectionNormal.dot(h)), material.r);
    return diffuse + specular;
}
I've provided my output and the expected output. The lighting looks a bit different because mine was originally an .exr file and the other is a .png, but I've drawn arrows in my output where the surface should be reflecting shadows, and it isn't.
A couple of things to check:
The visibility check in the inner for loop might be returning a false positive (i.e. it's concluding that none of the surfaces[k] lie between your intersection point and lights[j], for some j). This would cause it to incorrectly add that lights[j]'s contribution to your accumulator. This would result in missing shadows, but it ought to happen everywhere, including your top recursion level, whereas you're only seeing missing shadows in reflections.
There might be an error in the color() method that's returning a wrong value that's then being added into accumulator, although without seeing that code it's hard to know for sure.
You're using postfix decrement on recursion_depth inside the materialToUse.isReflective() check. Can you verify that the decremented value of recursion_depth is actually being passed to the shade() method call? (And if not, try changing to prefix decrement.)
return this->shade(... recursion_depth--)...
EDIT: Can you also verify that recursion_depth is just a parameter to the shade() method, i.e. that there isn't a global / static recursion_depth anywhere. Assuming that there isn't (and there shouldn't be), you can change the call above to
return this->shade(... recursion_depth - 1)...
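For reference, here is a small standalone illustration (not taken from the code above) of why the postfix form is suspect: an argument written as depth-- passes the old value, so a recursive call would never see a smaller depth.
#include <iostream>

void callee(int depth) { std::cout << "callee sees " << depth << "\n"; }

int main() {
    int depth = 3;
    callee(depth--); // postfix: prints "callee sees 3"; depth becomes 2 only after the call
    callee(--depth); // prefix:  depth becomes 1 first, then prints "callee sees 1"
    // In the recursive call above, recursion_depth - 1 says the same thing more clearly.
    return 0;
}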
EDIT 2: A couple of other things to look at:
In color(), I don't understand why you're including the direction to the camera in your calculations. The color of intersections other than the first one, per pixel, ought to be independent of where the camera is. But I doubt that's the cause of this issue.
Verify that return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); is doing the right thing with that matrix multiplication. Why are you multiplying by materialToUse.km?
Verify that materialToUse.km is constant per surface (i.e. it doesn't change over the geometry of the surface, the depth of iteration, or anything else).
Break up the statement return this->shade(accumulator, r, surfaces, lights, recursion_depth--).mult(materialToUse.km); into its component objects, so you can see the intermediate results in the debugger:
Vec reflectedColor = this->shade(accumulator, r, surfaces, lights, recursion_depth - 1);
Vec multipliedColor = reflectedColor.mult(materialToUse.km);
return multipliedColor;
Determine the image (x, y) coordinates of one of your problematic pixels. Set a conditional breakpoint that's triggered when rendering that pixel, and then step through your shade() method. Assuming you pick the pixel pointed to by the bottom right arrow in your example image, you ought to see one recursion into shade(). Stepping through that first recursion, you'll see that your code is incorrectly adding the light's contribution at the floor, when that point should be in shadow.
To answer my own question: I was not checking that t was less than the distance from the intersection point to the light's position.
Instead of:
if (t > EPSILON && t < closestIntersection) {
visible = false;
break;
}
it should be:
if (t > EPSILON && t < max_t) {
visible = false;
break;
}
where max_t is
double max_t = dirToLight.magnitude();
before dirToLight has been normalized.
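For context, the corrected visibility test inside shade() would look roughly like this (a sketch based on the code above, not a verified patch):
Vec toLight = light->origin - intersectionPoint; // unnormalized vector to the light
double max_t = toLight.magnitude();              // distance from the hit point to the light
Vec dirToLight = toLight.norm();                 // normalize only after taking the distance

bool visible = true;
for (unsigned int k = 0; k < surfaces.size(); k++) {
    double t = surfaces[k]->intersection(Ray(intersectionPoint, dirToLight));
    // only occluders strictly between the hit point and the light cast a shadow
    if (t > EPSILON && t < max_t) {
        visible = false;
        break;
    }
}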
I need to generate an SDF on a grid from a 2D mesh, to represent the mesh as a closed body in Cinder.
My first approach was to use a (Euclidean) distance function to check whether a grid point is close to a mesh point and then set the value to - or +, but this resulted in bad resolution. Next I tried to add up distances to get a continuous distance field, which resulted in a blown-up object.
I am not sure how to represent the distance to a closed object described by a mesh (concave or convex). My current approach is described in the code below.
#include <iostream>
#include <fstream>
#include <string>
#include <Eigen/Dense>
#include <vector>
#include <algorithm>
#include <random>

using namespace std;
using namespace Eigen;

typedef Eigen::Matrix<double, 2, 1> Vector2;
typedef Eigen::Matrix<double, 3, 2> Vector32;
typedef std::vector<Vector2, Eigen::aligned_allocator<Vector2> > Vector2List;
typedef std::vector<Eigen::Vector3i, Eigen::aligned_allocator<Eigen::Vector3i> > Vector3iList;
typedef std::vector<Vector32> Vector32List;
typedef Eigen::Array<double, Eigen::Dynamic, Eigen::Dynamic> grid_t;

void f(Vector2List vertices, Vector3iList triangles)
{   // each entry of triangles holds the indices of the vertices
    // that form one triangle of the mesh
    grid_t sdf = grid_t::Zero(resolution, resolution);
    for (int x = 0; x < resolution; ++x) {
        for (int y = 0; y < resolution; ++y) {
            Vector2d pos((x + 0.5) / resolution, (y + 0.5) / resolution);
            double dist = 1 / double(resolution*resolution);
            double check = 100;
            double val = 0;
            for (std::vector<Vector2>::iterator mean = vertices.begin(); mean != vertices.end(); ++mean) {
                //try sdf with euclidean distance function
                check = (pos - *mean).squaredNorm();
                if (check < dist) {
                    val = -1; break;
                }
                else {
                    val = 20;
                }
            }
            val *= resolution;
            static const double epsilon = 0.01;
            if (abs(val) < epsilon) {
                val = 0;
                numberOfClamped++;
            }
            sdf(x, y) = val;
        }
    }
}
It seems as if you have a slight misunderstanding of what the SDF actually is. So let me start with this.
The Signed Distance Function is a function over 2D space that gives you the distance of the respective point to the closest point on the mesh. The distance is positive for points outside of the mesh and negative for points inside (or the other way around). Naturally, points directly on the mesh will have zero distance. We can represent this function formally as:
sdf(x, y) = distance
This is a continuous function and we need a discrete representation that we can work with. A common choice is to use a uniform grid like the one that you want to use. We then sample the SDF at the grid points. Once we have distance values for all our grid points, we can interpolate the SDF between them to get the SDF everywhere. Note that each sample corresponds to a single point and not an area (e.g., a cell).
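To make the sampling step concrete, here is a minimal sketch that samples the exact SDF of a circle (standing in for your mesh) at the nodes of a uniform grid over the unit square; circleSdf and sampleSdf are illustrative names, not part of your code.
#include <Eigen/Dense>

// Exact SDF of a circle: negative inside, zero on the boundary, positive outside.
double circleSdf(const Eigen::Vector2d& p, const Eigen::Vector2d& centre, double radius) {
    return (p - centre).norm() - radius;
}

// Sample the SDF at the nodes of a uniform grid over [0, 1] x [0, 1].
// (Cell-centred samples would use (x + 0.5) / resolution instead; see the remark below.)
Eigen::ArrayXXd sampleSdf(int resolution, const Eigen::Vector2d& centre, double radius) {
    Eigen::ArrayXXd sdf(resolution, resolution);
    for (int x = 0; x < resolution; ++x)
        for (int y = 0; y < resolution; ++y) {
            Eigen::Vector2d pos(double(x) / (resolution - 1), double(y) / (resolution - 1));
            sdf(x, y) = circleSdf(pos, centre, radius);
        }
    return sdf;
}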
With this in mind, let us take a look at your code:
Vector2d pos((x + 0.5) / resolution, (y + 0.5) / resolution);
This depends on how the grid point indices map to global coordinates. It might be correct. However, it looks as if it assumes that sample positions are located in the middle of the respective cells. Again, this might be correct, but I suspect the + 0.5 should be left out.
for (std::vector<Vector2>::iterator mean = vertices.begin(); mean != vertices.end(); ++mean)
This is an approximation of the SDF. It calculates the closest vertex of the mesh and not the closest point (which may lie on an edge). For dense meshes, this should be fine. If you have coarse meshes, you should iterate the edges and calculate the closest points on these.
if (check < dist) {
val = -1; break;
} else {
val = 20;
}
I don't really know what this is. As explained above, the value of the SDF is the signed distance, not some arbitrary value, and the sign should correspond to whether the grid point is inside the mesh, not to whether the mesh happens to be close to it. So what you should do instead is something like:
if(check < val * val) { //assumes val starts out at a large value (e.g. numeric_limits<double>::max())
    //this point is closer than the current closest point
    val = std::sqrt(check); //set to absolute distance
    if(pos is inside the mesh)
        val *= -1; //invert the sign
}
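The "inside the mesh" test for the sign can be done per grid point by checking whether it falls inside any triangle of the mesh. A sketch reusing the Vector2List/Vector3iList typedefs from your code (pointInTriangle and insideMesh are illustrative names):
#include <Eigen/Dense>
#include <vector>

// True if p lies inside (or on) the triangle (a, b, c), using sign tests.
bool pointInTriangle(const Eigen::Vector2d& p, const Eigen::Vector2d& a,
                     const Eigen::Vector2d& b, const Eigen::Vector2d& c) {
    auto cross = [](const Eigen::Vector2d& u, const Eigen::Vector2d& v) {
        return u.x() * v.y() - u.y() * v.x();
    };
    double d1 = cross(b - a, p - a);
    double d2 = cross(c - b, p - b);
    double d3 = cross(a - c, p - c);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos); // all edge tests on the same side (either winding)
}

// For a triangulated region, "inside the mesh" simply means inside some triangle.
bool insideMesh(const Eigen::Vector2d& p, const Vector2List& vertices,
                const Vector3iList& triangles) {
    for (const Eigen::Vector3i& t : triangles)
        if (pointInTriangle(p, vertices[t[0]], vertices[t[1]], vertices[t[2]]))
            return true;
    return false;
}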
And finally, this piece:
val *= resolution;
static const double epsilon = 0.01;
if (abs(val) < epsilon) {
val = 0;
numberOfClamped++;
}
Again, I don't know what this is supposed to do. Just leave it out.
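Finally, regarding the coarse-mesh case mentioned above: the closest point on an edge can be found by projecting onto the segment and clamping the parameter. A small sketch using the same Eigen types (segmentDistance is an illustrative name):
#include <Eigen/Dense>
#include <algorithm>

// Distance from point p to the segment [a, b] (assumes a != b).
double segmentDistance(const Eigen::Vector2d& p,
                       const Eigen::Vector2d& a,
                       const Eigen::Vector2d& b) {
    Eigen::Vector2d ab = b - a;
    double t = (p - a).dot(ab) / ab.squaredNorm(); // parameter of the orthogonal projection
    t = std::min(1.0, std::max(0.0, t));           // clamp to the segment
    return (p - (a + t * ab)).norm();
}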
I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection but after moving onto Lambertian and Blinn-Phong Shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simply perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector;
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the hit point's position from the sphere's position to get the vector from the sphere's center to the hit point, I believe I should expect to get a vector where all of the values lie within the range (-r,r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector where the z value is -15.8284. This seems wrong to me; but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units away from each other in the z plane? Obviously these two numbers are within 1 from each other in absolute value terms, that's what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6) with rad 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f;
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d*(e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part*part) - (part3)*((part2*part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3X3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r*-1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy. Right now the code intersects all rays with all triangles of all objects. With a BVH, we instead ask: "given this ray, which triangles does it intersect?" This means that for each ray, you generally only need to test for intersection with a handful of bounding volumes and triangles, rather than every single triangle in the scene.
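To give a flavour of the building block involved: each BVH node stores an axis-aligned bounding box, and a ray is tested against the box with the standard slab test before any triangles inside it are touched. A minimal sketch with plain stand-in structs (not the V3/M33 classes from the question):
#include <algorithm>
#include <utility>
#include <cfloat>

struct Vec3 { float x, y, z; };

// Slab test: true if origin + t*dir (t >= 0) hits the box [bmin, bmax].
// invDir holds 1/dir per component, precomputed once per ray.
bool rayHitsBox(const Vec3& origin, const Vec3& invDir, const Vec3& bmin, const Vec3& bmax)
{
    float t0 = 0.0f, t1 = FLT_MAX;
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float id[3] = { invDir.x, invDir.y, invDir.z };
    const float lo[3] = { bmin.x, bmin.y, bmin.z };
    const float hi[3] = { bmax.x, bmax.y, bmax.z };
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (lo[axis] - o[axis]) * id[axis];
        float tFar  = (hi[axis] - o[axis]) * id[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false; // the slabs don't overlap: the ray misses the box
    }
    return true;
}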
IntersectRayWithTriangleWithThisOrigin
From the look of it, it builds the inverse of a transform matrix whose basis vectors X and Y are the triangle edges.
I don't quite get the Z axis: I would expect the ray direction there rather than the pixel position (the ray origin), but I may be misinterpreting something.
In any case, the inverse matrix computation is the biggest problem: you are computing it for every triangle, for every pixel, and that is a lot of work.
It would be much faster to compute the inverse transform matrix of each triangle once, before ray tracing, where X and Y are the edge basis and Z is perpendicular to both, always facing the same direction with respect to the camera.
Then you just transform your ray into that space and check the intersection limits; that is just a matrix*vector multiply and a few ifs instead of an inverse matrix computation per triangle (see the sketch below).
Another way would be to solve the ray vs. plane intersection algebraically, which should lead to a much simpler equation than a matrix inversion; after that it is just a matter of bound checking against the basis vectors.
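To make the precompute-per-triangle idea concrete, here is a rough, self-contained sketch with minimal stand-in types (not the V3/M33 classes from the question): the inverse of the matrix with columns (edge1, edge2, normal) is built once per triangle, and each ray then costs only a few dot products and comparisons.
#include <cmath>

struct V { float x, y, z; };

static V     sub(V a, V b)       { return V{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static V     cross(V a, V b)     { return V{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float dot(V a, V b)       { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V     scale(V a, float s) { return V{ a.x*s, a.y*s, a.z*s }; }

// Per-triangle data, computed once before the pixel loops.
// The rows are the inverse of the matrix whose columns are (edge1, edge2, normal).
struct PrecomputedTri {
    V v0;               // first vertex
    V row0, row1, row2; // rows of that inverse
};

PrecomputedTri precompute(V v0, V v1, V v2)
{
    V e1 = sub(v1, v0), e2 = sub(v2, v0);
    V n  = cross(e1, e2);              // perpendicular to the triangle
    float invDet = 1.0f / dot(n, n);   // nonzero for non-degenerate triangles
    PrecomputedTri t;
    t.v0   = v0;
    t.row0 = scale(cross(e2, n), invDet);
    t.row1 = scale(cross(n, e1), invDet);
    t.row2 = scale(n, invDet);
    return t;
}

// Per ray: a handful of dot products and comparisons, no matrix inversion.
bool intersect(const PrecomputedTri& tri, V origin, V dir, float& t, float& beta, float& gamma)
{
    V o = sub(origin, tri.v0);
    float dz = dot(tri.row2, dir);
    if (std::fabs(dz) < 1e-8f) return false;           // ray parallel to the triangle plane
    t = -dot(tri.row2, o) / dz;
    if (t <= 0.0f) return false;                        // triangle is behind the ray origin
    beta  = dot(tri.row0, o) + t * dot(tri.row0, dir);  // barycentric coordinates of the hit
    gamma = dot(tri.row1, o) + t * dot(tri.row1, dir);
    return beta >= 0.0f && gamma >= 0.0f && beta + gamma <= 1.0f;
}
With this, the per-pixel inner loop becomes a single call to intersect() per precomputed triangle, and a BVH (as suggested above) would further cut down how many triangles are even considered.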
I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refractions, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. Refraction is kind of working: the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. Conversely, when I decrease the index of refraction, the sphere gets larger and the black ring smaller... until, with the index of refraction set to one, the ring disappears completely.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for the dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
    //first find the nearest intersection of a ray with an object
    Colour finalColour = skyBlue * (r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
    double t, t_min = INFINITY;
    int index_nearObj = -1;
    for(int i = 0; i < objSize; i++)
    {
        if(!dynamic_cast<Light *>(objects[i])) //skip light src
        {
            t = objects[i]->findParam(r);
            if(t > 0 && t < t_min)
            {
                t_min = t;
                index_nearObj = i;
            }
        }
    }
    //no intersection
    if(index_nearObj < 0)
        return finalColour;

    Vector intersect = r.getOrigin() + r.getDirection()*t_min;
    Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
    Colour objectColor = objects[index_nearObj]->getColor();
    Ray rRefl, rRefr; //reflected and refracted Ray
    Colour refl = finalColour, refr = finalColour; //reflected and refracted colours
    double reflectance = 0, transmittance = 0;
    if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
    {
        //handle reflection
        rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
        refl = raytrace(rRefl, depth + 1);
        reflectance = 1;
    }
    if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
    {
        //handle transmission
        rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
        refr = raytrace(rRefr, depth + 1);
    }

    Ray rShadow; //shadow ray
    bool shadowed;
    double t_light = -1;
    Colour localColour;
    Vector tmpv;
    //get material properties
    double ka = 0.2; //ambient coefficient
    double kd; //diffuse coefficient
    double ks; //specular coefficient
    Colour ambient = ka * objectColor; //ambient component
    Colour diffuse, specular;
    double brightness;
    localColour = ambient;
    //look if the object is in shadow or light
    //do this by casting a ray from the obj and
    //check if there is an intersection with another obj
    for(int i = 0; i < objSize; i++)
    {
        if(dynamic_cast<Light *>(objects[i])) //if object is a light
        {
            //for each light
            shadowed = false;
            //create Ray to light
            tmpv = objects[i]->getPosition() - intersect;
            rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
            t_light = objects[i]->findParam(rShadow);
            if(t_light < 0) //no intersect, which is quite impossible
                continue;
            //then we check if that Ray intersects one object that is not a light
            for(int j = 0; j < objSize; j++)
            {
                if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj) //if obj is not a light
                {
                    t = objects[j]->findParam(rShadow);
                    //if it is smaller we know the light is behind the object
                    //--> shadowed by this light
                    if (t >= 0 && t < t_light)
                    {
                        // Set the flag and stop the cycle
                        shadowed = true;
                        break;
                    }
                }
            }
            if(!shadowed)
            {
                rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
                //reflected ray from light src, for ks
                kd = maximum(0.0, (normal|rShadow.getDirection()));
                if(objects[index_nearObj]->getShiny() <= 0)
                    ks = 0;
                else
                    ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());
                diffuse = kd * objectColor; // * objects[i]->getColour();
                specular = ks * objects[i]->getColor();
                brightness = 1 / (1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
                localColour += brightness * (diffuse + specular);
            }
        }
    }
    finalColour = localColour + (transmittance * refr + reflectance * refl);
    return finalColour;
}
Now the function that calculates the refracted ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far; it may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);
    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal; //invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }
    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);
    //fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n2 * cosT);
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;
    if(n == 1.0)
        return r;
    if(cosT*cosT < 0.0) //tot inner refl
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }
    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also changed the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same image (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    //the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane:
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens:
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element:
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give a general idea, not (for example) to reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal. Currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(). Again, you're currently using r.getDirection() in your calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
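Putting those points together, the relevant part of calcRefractingRay would look roughly like this (a sketch against the question's code, not a tested drop-in; normalize() stands in for however your Vector class produces a unit vector):
Vector d = normalize(r.getDirection());      // use a unit-length direction throughout
double cosI = (d | normal);
double n1, n2;
if (cosI > 0.0) {                            // keep the pre-edit index assignment
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;                        // invert
} else {
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
double n = n1 / n2;
double sinT2 = n * n * (1.0 - cosI * cosI);
if (sinT2 > 1.0) {                           // total internal reflection: radicand would be negative
    refl = 1;
    trans = 0;
    return calcReflectingRay(r, intersection, normal);
}
double cosT = sqrt(1.0 - sinT2);
// ... Fresnel terms for refl and trans as before ...
Vector dir = n * d + (n * cosI - cosT) * normal;  // again with the normalized direction
return Ray(intersection + dir * BIAS, dir);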
Hope that helps!
I'm attempting to determine whether a specific point lies inside a polyhedron. In my current implementation, the method I'm working on takes the point we're looking for and an array of the faces of the polyhedron (triangles in this case, but it could be other polygons later). I've been trying to work from the info found here: http://softsurfer.com/Archive/algorithm_0111/algorithm_0111.htm
Below, you'll see my "inside" method. I know that the nrml/normal thing is kind of weird... it's the result of old code. When I was running this it seemed to always return true no matter what input I gave it. (This is solved, please see my answer below -- this code is working now.)
bool Container::inside(Point* point, float* polyhedron[3], int faces) {
    Vector* dS = Vector::fromPoints(point->X, point->Y, point->Z,
                                    100, 100, 100);
    int T_e = 0;
    int T_l = 1;
    for (int i = 0; i < faces; i++) {
        float* polygon = polyhedron[i];
        float* nrml = normal(&polygon[0], &polygon[1], &polygon[2]);
        Vector* normal = new Vector(nrml[0], nrml[1], nrml[2]);
        delete nrml;
        float N = -((point->X - polygon[0][0])*normal->X +
                    (point->Y - polygon[0][1])*normal->Y +
                    (point->Z - polygon[0][2])*normal->Z);
        float D = dS->dot(*normal);
        if (D == 0) {
            if (N < 0) {
                return false;
            }
            continue;
        }
        float t = N/D;
        if (D < 0) {
            T_e = (t > T_e) ? t : T_e;
            if (T_e > T_l) {
                return false;
            }
        } else {
            T_l = (t < T_l) ? t : T_l;
            if (T_l < T_e) {
                return false;
            }
        }
    }
    return true;
}
This is in C++ but as mentioned in the comments, it's really very language agnostic.
The link in your question has expired and I could not understand the algorithm from your code. Assuming you have a convex polyhedron with counterclockwise oriented faces (seen from outside), it should be sufficient to check that your point is behind all faces. To do that, you can take the vector from the point to each face and check the sign of the scalar product with the face's normal. If it is positive, the point is behind the face; if it is zero, the point is on the face; if it is negative, the point is in front of the face.
Here is some complete C++11 code, that works with 3-point faces or plain more-point faces (only the first 3 points are considered). You can easily change bound to exclude the boundaries.
#include <vector>
#include <cassert>
#include <iostream>
#include <cmath>
#include <cstdlib>

struct Vector {
    double x, y, z;

    Vector operator-(Vector p) const {
        return Vector{x - p.x, y - p.y, z - p.z};
    }

    Vector cross(Vector p) const {
        return Vector{
            y * p.z - p.y * z,
            z * p.x - p.z * x,
            x * p.y - p.x * y
        };
    }

    double dot(Vector p) const {
        return x * p.x + y * p.y + z * p.z;
    }

    double norm() const {
        return std::sqrt(x*x + y*y + z*z);
    }
};

using Point = Vector;

struct Face {
    std::vector<Point> v;

    Vector normal() const {
        assert(v.size() > 2);
        Vector dir1 = v[1] - v[0];
        Vector dir2 = v[2] - v[0];
        Vector n = dir1.cross(dir2);
        double d = n.norm();
        return Vector{n.x / d, n.y / d, n.z / d};
    }
};

bool isInConvexPoly(Point const& p, std::vector<Face> const& fs) {
    for (Face const& f : fs) {
        Vector p2f = f.v[0] - p;         // f.v[0] is an arbitrary point on f
        double d = p2f.dot(f.normal());
        d /= p2f.norm();                 // for numeric stability
        constexpr double bound = -1e-15; // use +1e-15 to exclude boundaries
        if (d < bound)
            return false;
    }
    return true;
}

int main(int argc, char* argv[]) {
    assert(argc == 3+1);
    char* end;
    Point p;
    p.x = std::strtod(argv[1], &end);
    p.y = std::strtod(argv[2], &end);
    p.z = std::strtod(argv[3], &end);

    std::vector<Face> cube{ // faces with 4 points, last point is ignored
        Face{{Point{0,0,0}, Point{1,0,0}, Point{1,0,1}, Point{0,0,1}}}, // front
        Face{{Point{0,1,0}, Point{0,1,1}, Point{1,1,1}, Point{1,1,0}}}, // back
        Face{{Point{0,0,0}, Point{0,0,1}, Point{0,1,1}, Point{0,1,0}}}, // left
        Face{{Point{1,0,0}, Point{1,1,0}, Point{1,1,1}, Point{1,0,1}}}, // right
        Face{{Point{0,0,1}, Point{1,0,1}, Point{1,1,1}, Point{0,1,1}}}, // top
        Face{{Point{0,0,0}, Point{0,1,0}, Point{1,1,0}, Point{1,0,0}}}, // bottom
    };

    std::cout << (isInConvexPoly(p, cube) ? "inside" : "outside") << std::endl;
    return 0;
}
Compile it with your favorite compiler
clang++ -Wall -std=c++11 code.cpp -o inpoly
and test it like
$ ./inpoly 0.5 0.5 0.5
inside
$ ./inpoly 1 1 1
inside
$ ./inpoly 2 2 2
outside
If your mesh is concave, and not necessarily watertight, that's rather hard to accomplish.
As a first step, find the point on the surface of the mesh closest to the query point. You need to keep track of the location, and of the specific feature: whether the closest point is in the middle of a face, on an edge of the mesh, or one of the vertices of the mesh.
If the feature is a face, you're lucky: you can use the winding to find whether the point is inside or outside. Compute the normal to the face (you don't even need to normalize it, a non-unit length will do), then compute dot( normal, pt - tri[0] ), where pt is your point and tri[0] is any vertex of the face. If the faces have consistent winding, the sign of that dot product will tell you if it's inside or outside.
If the feature is an edge, compute the normals of both faces (by normalizing a cross product), add them together, use that as the normal to the mesh, and compute the same dot product.
The hardest case is when a vertex is the closest feature. To compute the mesh normal at that vertex, you need to compute the sum of the normals of the faces sharing that vertex, weighted by the 2D angle of each face at that vertex. For example, for a cube vertex with 3 neighbouring triangles, the weights will be Pi/2. For a cube vertex with 6 neighbouring triangles, the weights will be Pi/4. And for real-life meshes the weights will be different for each face, in the range [0 .. +Pi]. This means you're going to need some inverse trigonometry code for this case to compute the angle, probably acos().
If you want to know why that works, see e.g. “Generating Signed Distance Fields From Triangle Meshes” by J. Andreas Bærentzen and Henrik Aanæs.
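For the vertex case, here is a rough sketch of that angle-weighted pseudonormal with minimal stand-in types (names are illustrative; the incident triangles are assumed to be consistently wound and passed with the shared vertex factored out):
#include <cmath>
#include <vector>
#include <algorithm>

struct V3d { double x, y, z; };

static V3d    sub(V3d a, V3d b)   { return V3d{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3d    cross(V3d a, V3d b) { return V3d{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double dot(V3d a, V3d b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3d    normalize(V3d a)    { double l = std::sqrt(dot(a, a)); return V3d{ a.x/l, a.y/l, a.z/l }; }

// A triangle incident to the vertex in question; b and c are the two other
// vertices, in the same winding order as the face.
struct IncidentTri { V3d b, c; };

// Pseudonormal at vertex v: face normals of the incident triangles,
// each weighted by the triangle's interior angle at v.
V3d angleWeightedNormal(const V3d& v, const std::vector<IncidentTri>& tris)
{
    V3d n = V3d{ 0, 0, 0 };
    for (const IncidentTri& t : tris) {
        V3d e1 = normalize(sub(t.b, v));
        V3d e2 = normalize(sub(t.c, v));
        double angle = std::acos(std::max(-1.0, std::min(1.0, dot(e1, e2)))); // clamp for safety
        V3d fn = normalize(cross(sub(t.b, v), sub(t.c, v)));                  // face normal from the winding
        n.x += angle * fn.x;
        n.y += angle * fn.y;
        n.z += angle * fn.z;
    }
    return normalize(n); // then use the sign of dot(result, pt - v), as described above
}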
I have already answered this question a couple of years ago. But since that time I've discovered a much better algorithm. It was invented in 2018, here's the link.
The idea is rather simple. Given that specific point, compute a sum of the signed solid angles of all faces of the polyhedron as viewed from that point. If the point is outside, that sum should be zero. If the point is inside, that sum should be ±4·π steradians; + or - depends on the winding order of the faces of the polyhedron.
That particular algorithm packs the polyhedron into a tree, which dramatically improves performance when you need multiple inside/outside queries for the same polyhedron. The algorithm only computes solid angles for individual faces when the face is very close to the query point. For large sets of faces far away from the query point, it instead uses an approximation of those sets, based on some numbers kept in the nodes of the BVH tree built from the source mesh.
With the limited precision of FP math, plus the losses from the approximation if you use that BVH tree, the sum will never be exactly 0 nor ±4·π. But still, a 2·π threshold works rather well in practice, at least in my experience: if the absolute value of the sum of solid angles is less than 2·π, consider the point to be outside.
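Without the tree acceleration, the per-face term is easy to write down. A brute-force sketch (illustrative names; the Van Oosterom-Strackee formula gives the signed solid angle of a triangle):
#include <cmath>
#include <vector>

struct P3 { double x, y, z; };

static P3     sub(P3 a, P3 b)   { return P3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static double dot(P3 a, P3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static P3     cross(P3 a, P3 b) { return P3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double len(P3 a)         { return std::sqrt(dot(a, a)); }

struct Triangle { P3 a, b, c; };

// Signed solid angle of triangle (a, b, c) as seen from p (Van Oosterom & Strackee).
double signedSolidAngle(const P3& p, const Triangle& t)
{
    P3 a = sub(t.a, p), b = sub(t.b, p), c = sub(t.c, p);
    double la = len(a), lb = len(b), lc = len(c);
    double num = dot(a, cross(b, c)); // determinant [a b c]
    double den = la*lb*lc + dot(a, b)*lc + dot(b, c)*la + dot(c, a)*lb;
    return 2.0 * std::atan2(num, den);
}

// Brute-force winding-number test: the sum is about ±4·pi inside and about 0 outside,
// so 2·pi is a natural threshold.
bool insideByWindingNumber(const P3& p, const std::vector<Triangle>& faces)
{
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (const Triangle& t : faces)
        sum += signedSolidAngle(p, t);
    return std::fabs(sum) > 2.0 * pi;
}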
It turns out that the problem was my reading of the algorithm referenced in the link above. I was reading:
N = - dot product of (P0-Vi) and ni;
as
N = - dot product of S and ni;
Having changed this, the code above now seems to work correctly. (I'm also updating the code in the question to reflect the correct solution).
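In code terms, using the names from the method above, the difference between the two readings is roughly:
// What the algorithm calls for: N = -dot(P0 - Vi, ni), with P0 the query point
// and Vi a vertex of face i (this is what the method above now computes):
float N = -((point->X - polygon[0][0])*normal->X +
            (point->Y - polygon[0][1])*normal->Y +
            (point->Z - polygon[0][2])*normal->Z);

// What I had been computing instead, i.e. N = -dot(S, ni), using the segment
// direction dS rather than (P0 - Vi):
// float N = -dS->dot(*normal);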