Raytracing reflection distortion - C++

I've started coding a raytracer, but today I encountered a problem when dealing with reflection.
First, here is an image of the problem:
I only computed the object's reflected color (so no light effect is applied on the reflected object)
The problem is the distortion, which I really don't understand.
I checked the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
  double cosAngle;
  Vector copyNormal = normal;
  Vector copyView = ray;

  copyNormal.makeUnit();
  copyView.makeUnit();
  cosAngle = copyView.scale(copyNormal);
  return (-2.0 * cosAngle * normal + ray);
}
So for example when my ray is hitting the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
  if (pass > 10)
    return obj->getColor();
  if (obj->getReflectionIndex() == 0) {
    // apply effects
    return obj->getColor();
  }
  Color cuColor(obj->getColor());
  Color newColor(0);
  Math math;
  Vector view;
  Vector normal;
  Vector reflected;
  Position impact;
  std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;

  normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
  view = Vector(impact.x, impact.y, impact.z) -
         Vector(camera.pos.x, camera.pos.y, camera.pos.z);
  reflected = math.calcReflectedVector(view, normal);
  reflectedObj = this->getClosestObj(reflected, Camera(impact));
  if (reflectedObj.second <= 0) {
    cuColor.mix(0x000000, obj->getReflectionIndex());
    return cuColor;
  }
  newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                     reflected, reflectedObj.second, pass + 1);
  // apply effects
  cuColor.mix(newColor, obj->getReflectionIndex());
  return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
  double cosAngle;
  Vector copyRay = ray;

  copyRay.makeUnit();
  cosAngle = copyRay.scale(normal);
  return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position &impact) const {
  const Position &objPos = obj->getPosition();
  Vector normal;

  impact.x = pos.x + k * rayVec.x;
  impact.y = pos.y + k * rayVec.y;
  impact.z = pos.z + k * rayVec.z;
  obj->calcNormal(normal, impact);
  return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see, there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance

There are many little things wrong with the example you provided. This may -- or may not -- answer your question, but since I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.
You have two classes, Vector and Position. It may well seem like a good idea, but why not see the position as the translation vector from the origin? That would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the math for you (like glm could). This way you'll also avoid mistakes like naming your dot function scale().
You create a camera from the position (which is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...} and, for each pixel of your image/reflections/refractions/..., you cast rays {origin + direction} (hence the name raytracer, not cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ... whereas a ray is simply... a ray (it could be a ray from the plane the output image is mapped onto to the first object, or a ray created by reflection, diffraction, scattering, ...).
And for the final point: I think your error may come from the Math::calcNormalVector(...) function. For a sphere at a position P and an intersection point I, the normal N is N = normalize(I - P); (see the sketch at the end of this answer).
EDIT: it seems your problem comes from Rt::getClosestObj. Everything else looks fine.
There are tons of websites/blogs/educational resources online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you don't figure out what is wrong with calcNormalVector(...), please post its code :)
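For reference, here is a minimal sketch of that sphere normal together with the matching reflection formula; Vec3, sphereCenter and hitPoint are hypothetical names used only for this illustration, not your classes:

#include <cmath>

struct Vec3 { double x, y, z; };  // hypothetical minimal vector type

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { return (1.0 / std::sqrt(dot(v, v))) * v; }

// Normal of a sphere at an intersection point: direction from the center to the hit point.
Vec3 sphereNormal(Vec3 sphereCenter, Vec3 hitPoint) {
    return normalize(hitPoint - sphereCenter);
}

// Reflection of an incident direction about a unit-length normal.
Vec3 reflect(Vec3 incident, Vec3 unitNormal) {
    return incident - (2.0 * dot(incident, unitNormal)) * unitNormal;
}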

Does that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
  return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, given the code you provided, I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't it be something like this?
this->getClosestObj(reflected, Camera(impact));

Related

Problem implementing reflections in ray tracing

I've implemented a simple ray tracer and now I'm trying to implement reflections, but objects are behaving as if they were transparent.
Here is my code for getting the reflected ray.
ray* reflected = new ray();
reflected->direction = rayIn.direction - (2 * glm::dot(rayIn.direction, normal)) * normal;
reflected->origin = int_point + epsilon * normal;
outColor += ((int_object->reflectivity)*intersectray(*reflected, depth - 1));
Here are the images. With the reflection code:
Without the reflection code:
I'll edit the post if more code is needed.
Edit: It seems the problem is in how I'm iterating through the objects in the scene. I insert the objects as
scene->add(sphere1);
scene->add(sphere2);
But when I change this to :
scene->add(sphere2);
scene->add(sphere1);
the output is correct.
Sphere 1 is closer to the camera than sphere 2, and they are not overlapping.
The problem was this part of the code:
for (objects in scene) {
  double intersection = (*objIterator)->intersect(rayIn, normal);
  if (intersection < minDistance && intersection > epsilon)
  {
    minDistance = intersection;
    int_object = *objIterator;
    int_point = rayIn.origin + intersection * rayIn.direction + (epsilon * normal);
  }
}
Here normal is used later for other calculations, but the first line updates normal for the current object's intersection (even if it's not the closest one). So I added a vector to store the normal of the closest intersected object and used that later.
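Here is a minimal sketch of the fix described above, written with glm like the snippet in the question; all the names (Object, Hit, closestHit, ...) are hypothetical, not the original code:

#include <glm/glm.hpp>
#include <limits>
#include <memory>
#include <vector>

struct Ray { glm::vec3 origin, direction; };

struct Object {
    virtual ~Object() = default;
    // Returns the hit distance (negative on miss) and writes the surface normal.
    virtual float intersect(const Ray& ray, glm::vec3& normal) const = 0;
};

struct Hit {
    std::shared_ptr<Object> object;
    glm::vec3 point;
    glm::vec3 normal;
    float distance = std::numeric_limits<float>::max();
};

Hit closestHit(const std::vector<std::shared_ptr<Object>>& objects,
               const Ray& ray, float epsilon) {
    Hit best;
    for (const auto& obj : objects) {
        glm::vec3 tempNormal;                   // normal written for *this* object only
        float t = obj->intersect(ray, tempNormal);
        if (t > epsilon && t < best.distance) { // keep only the closest hit
            best.distance = t;
            best.object = obj;
            best.normal = tempNormal;           // store its normal, not the last one tested
            best.point = ray.origin + t * ray.direction + epsilon * tempNormal;
        }
    }
    return best;
}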

C++ raytracer bug

I've written a raytracer in C++. This is the snippet for calculating the diffuse component:
// diffuse component
color diffuse(0, 0, 0);
if (intrs.mat.diffuseness > 0)
{
    for (auto &light : lights)
    {
        // define ray from hit object to light
        ray light_dir(intrs.point, (light->point - intrs.point).normalize());
        double nl = light_dir.direction * intrs.normal; // dot product
        double diminish_coeff = 1.0;
        double dist = intrs.point.sqrDistance(light->point);
        // check whether it reaches the light
        if (nl > 0)
        {
            for (int i = 0; i < (int)shapes.size(); ++i)
            {
                shape::intersection temp_intrs(shapes[i]->intersect(light_dir, shapes[i]->interpolate_normals));
                if (temp_intrs.valid && temp_intrs.point.sqrDistance(intrs.point) < dist)
                {
                    diminish_coeff *= shadow_darkness;
                    break;
                }
            }
        }
        diffuse += intrs.mat.diffuseness * intrs.mat.col * light->light_color * light->light_intensity * nl * diminish_coeff;
    }
}
Of course, I can't post the entire code, but I think it should be clear what I'm doing here: intrs is the current intersection of a ray and an object, and shapes is a vector of all objects in the scene.
Colors are represented as RGB in the (0, 1) range. Addition and multiplication of colors are simple memberwise addition and multiplication. Only when the raytracing is over and I want to write to the image file do I multiply my colors by 255, clamping any component larger than that to 255.
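As an aside, a minimal sketch of that final conversion step, assuming a simple color struct (the names here are hypothetical, not the poster's classes):

#include <algorithm>
#include <cstdint>

// Hypothetical minimal color type; components stay in the (0, 1) range while tracing.
struct color { double r, g, b; };

// Convert one component for the image file: scale by 255 and clamp the top end.
std::uint8_t to_byte(double component) {
    return static_cast<std::uint8_t>(std::min(component * 255.0, 255.0));
}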
Currently, there is one point light in the scene and it's white: color(1,1,1), intensity = 1.0.
This is my rendered image:
So, this is not right - the cupboard on the left is supposed to be green, and the box is supposed to be red.
Is there something obviously wrong with my implementation? I can't seem to figure it out. I'll post some more code if necessary.
It seems that your diffuse += line should be inside the if (nl > 0) condition, not outside it.
I found the problem. For some reason, my intrs.normal vector wasn't normalized. Thank you everyone for your help.
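Putting both points together (normalize the normal, and only add a contribution when nl > 0), a minimal sketch of the Lambert term could look like this; vec3 and lambert are hypothetical names, not the poster's classes:

#include <algorithm>
#include <cmath>

// Hypothetical minimal vector type for the sketch below.
struct vec3 {
    double x, y, z;
    vec3 normalized() const {
        double len = std::sqrt(x * x + y * y + z * z);
        return {x / len, y / len, z / len};
    }
};

double dot(const vec3& a, const vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Lambert term for one light: both directions are normalized first, and the
// contribution is zero when the surface faces away from the light (nl <= 0).
double lambert(const vec3& surfaceNormal, const vec3& toLight) {
    double nl = dot(surfaceNormal.normalized(), toLight.normalized());
    return std::max(nl, 0.0);
}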

How to handle incorrect index calculation for discretized ray tracing?

The situation is as follows. I am trying to implement a linear voxel search in a GLSL shader for efficient voxel ray tracing. In other words, I have a 3D texture and I am ray tracing on it, but I am trying to ray trace such that I only ever check voxels intersected by the ray once.
To this effect I have written a program with the following results:
Not efficient but correct:
The above image was obtained by repeatedly advancing the ray by a small epsilon and sampling the texture on each iteration, which produces the correct results but is very inefficient.
That would look like:
loop {
    start += direction * 0.01;
    sample(start);
}
To make it efficient I decided to instead implement the following lookup function:
float bound(float val)
{
    if (val >= 0)
        return voxel_size;
    return 0;
}

float planeIntersection(vec3 ray, vec3 origin, vec3 n, vec3 q)
{
    n = normalize(n);
    if (dot(ray, n) != 0)
        return (dot(q, n) - dot(n, origin)) / dot(ray, n);
    return -1;
}

vec3 get_voxel(vec3 start, vec3 direction)
{
    direction = normalize(direction);

    vec3 discretized_pos = ivec3((start * 1.f / (voxel_size))) * voxel_size;

    vec3 n_x = vec3(sign(direction.x), 0, 0);
    vec3 n_y = vec3(0, sign(direction.y), 0);
    vec3 n_z = vec3(0, 0, sign(direction.z));

    float bound_x, bound_y, bound_z;
    bound_x = bound(direction.x);
    bound_y = bound(direction.y);
    bound_z = bound(direction.z);

    float t_x, t_y, t_z;
    t_x = planeIntersection(direction, start, n_x,
                            discretized_pos + vec3(bound_x, 0, 0));
    t_y = planeIntersection(direction, start, n_y,
                            discretized_pos + vec3(0, bound_y, 0));
    t_z = planeIntersection(direction, start, n_z,
                            discretized_pos + vec3(0, 0, bound_z));

    if (t_x < 0)
        t_x = 1.f / 0.f;
    if (t_y < 0)
        t_y = 1.f / 0.f;
    if (t_z < 0)
        t_z = 1.f / 0.f;

    float t = min(t_x, t_y);
    t = min(t, t_z);

    return start + direction * t;
}
Which produces the following result:
Notice the triangle aliasing on the left side of some surfaces.
It seems this aliasing occurs because some coordinates are not being set to their correct voxel.
For example modifying the truncation part as follows:
vec3 discretized_pos = ivec3((start*1.f/(voxel_size)) - vec3(0.1)) * voxel_size;
Creates:
So it has fixed the issue for some surfaces and caused it for others.
I wanted to know if there is a way in which I can correct this truncation so that this error does not happen.
Update:
I have narrowed down the issue a bit. Observe the following image:
The numbers represent the order in which I expect the boxes to be visited.
As you can see, for some of the points the sampling of the fifth box seems to be omitted.
The following is the sampling code:
vec4 grabVoxel(vec3 pos)
{
    pos *= 1.f / base_voxel_size;
    pos.x /= (width - 1);
    pos.y /= (depth - 1);
    pos.z /= (height - 1);

    vec4 voxelVal = texture(voxel_map, pos);
    return voxelVal;
}
Yep, that was the +/- rounding I was talking about in my comments somewhere in your previous questions related to this. What you need to do is have the step equal to the grid size in one of the axes (and test 3 times: once for |dx| = 1, then for |dy| = 1, and lastly for |dz| = 1).
Also, you should create a debug draw of a 2D slice through your map to actually see where the hits for a single specific test ray occurred. Then, based on the direction of the ray in each axis, you set the rounding rules separately. Without this you are just blindly patching one case and corrupting the other two ...
Now actually look at this (I linked it for you before, but you clearly did not):
Wolf and Doom ray casting techniques
especially pay attention to:
On the right it shows you how to compute the ray step (your epsilon). You simply scale the ray direction so that one of the coordinates is +/-1. For simplicity, start with a 2D slice through your map. The red dot is the ray start position. Green is the ray step vector for vertical grid line hits and red is for horizontal grid line hits (z will be analogous).
Now you should add a 2D overview of your map through some height slice that is visible (like the image on the left) and add a dot or marker at each detected intersection, distinguishing x, y and z hits by color. Do this for a single ray only (I use the center-of-view ray). First handle the view when you look in the X+ direction, then X-, and when done move on to Y and Z ...
In my GLSL volumetric 3D back raytracer, which I also linked you to before, look at these lines:
if (dir.x<0.0) { p+=dir*(((floor(p.x*n)-_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(+1.0,0.0,0.0); }
if (dir.x>0.0) { p+=dir*((( ceil(p.x*n)+_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(-1.0,0.0,0.0); }
if (dir.y<0.0) { p+=dir*(((floor(p.y*n)-_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,+1.0,0.0); }
if (dir.y>0.0) { p+=dir*((( ceil(p.y*n)+_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,-1.0,0.0); }
if (dir.z<0.0) { p+=dir*(((floor(p.z*n)-_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,+1.0); }
if (dir.z>0.0) { p+=dir*((( ceil(p.z*n)+_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,-1.0); }
They show how I did this. As you can see, I use a different rounding/flooring rule for each of the 6 cases. This way you handle one case without corrupting the other two. The rounding rule depends on a lot of things, like how your coordinate system is offset relative to (0,0,0) and more, so it might be different in your code, but the if conditions should be the same. Also, as you can see, I handle this by offsetting the ray start position a bit instead of having these conditions inside the ray traversal loop castray.
That macro casts the ray and looks for intersections with the grid, and on top of that it actually z-sorts the intersections and uses the first valid one (that is what l, ll are for; no other conditions or combinations of ray results are needed). So my way of dealing with this is to cast a ray for each type of intersection (x, y, z), starting at the first intersection with the grid for the same axis. You need to take the starting offset into account so that l, ll represent the intersection distance from the real start of the ray, not from the offset one ...
Also, a good idea is to do this on the CPU side first and, once it's 100% working, port it to GLSL, as things like this are very hard to debug in GLSL.
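For a CPU-side starting point, here is a minimal sketch of the "visit each intersected voxel exactly once" idea in C++ (an Amanatides & Woo style grid walk). This is not the answerer's GLSL code; voxelSize, maxSteps and the printf stand-in for sampling are assumptions for illustration:

#include <cmath>
#include <cstdio>

// Walk a ray through a uniform grid, visiting every voxel it passes through once.
void traverseGrid(double ox, double oy, double oz,   // ray origin
                  double dx, double dy, double dz,   // ray direction (need not be unit length)
                  double voxelSize, int maxSteps) {
    int ix = (int)std::floor(ox / voxelSize);
    int iy = (int)std::floor(oy / voxelSize);
    int iz = (int)std::floor(oz / voxelSize);

    int stepX = (dx >= 0) ? 1 : -1;
    int stepY = (dy >= 0) ? 1 : -1;
    int stepZ = (dz >= 0) ? 1 : -1;

    const double inf = 1e30;
    // Ray parameter at which the first boundary in each axis is crossed.
    double tMaxX = (dx != 0) ? (((ix + (stepX > 0)) * voxelSize) - ox) / dx : inf;
    double tMaxY = (dy != 0) ? (((iy + (stepY > 0)) * voxelSize) - oy) / dy : inf;
    double tMaxZ = (dz != 0) ? (((iz + (stepZ > 0)) * voxelSize) - oz) / dz : inf;
    // Ray parameter between two consecutive boundaries in each axis.
    double tDeltaX = (dx != 0) ? voxelSize / std::fabs(dx) : inf;
    double tDeltaY = (dy != 0) ? voxelSize / std::fabs(dy) : inf;
    double tDeltaZ = (dz != 0) ? voxelSize / std::fabs(dz) : inf;

    for (int i = 0; i < maxSteps; ++i) {
        std::printf("visiting voxel (%d, %d, %d)\n", ix, iy, iz);  // sample the voxel here
        // Step into the neighbouring voxel whose boundary is hit first.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDeltaY; }
        else                                { iz += stepZ; tMaxZ += tDeltaZ; }
    }
}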

How to properly handle refraction in raytracing

I am currently working on a raytracer just for fun and I have trouble with the refraction handling.
The source code of the whole raytracer can be found on GitHub. EDIT: The code has migrated to GitLab.
Here is an image of the render:
The right sphere is set to have a refraction index of 1.5 (glass).
On top of the refraction, I want to handle a "transparency" coefficient, which is defined as follows:
0 --> Object is 100% opaque
1 --> Object is 100% transparent (no trace of the original object's color)
This sphere has a transparency of 1.
Here is the code handling the refraction part. It can be found on GitHub here.
Color handleTransparency(const Scene& scene,
                         const Ray& ray,
                         const IntersectionData& data,
                         uint8 depth)
{
    Ray refracted(RayType::Transparency, data.point, ray.getDirection());
    Float_t eta = data.material->getRefraction();

    if (eta != 1 && eta > Globals::Epsilon)
        refracted.setDirection(Tools::Refract(ray.getDirection(), data.normal, eta));

    refracted.setOrigin(data.point + Globals::Epsilon * refracted.getDirection());
    return inter(scene, refracted, depth + 1);
}
// http://graphics.stanford.edu/courses/cs148-10-summer/docs/2006--degreve--reflection_refraction.pdf
Float_t getFresnelReflectance(const IntersectionData& data, const Ray& ray)
{
    Float_t n = data.material->getRefraction();
    Float_t cosI = -Tools::DotProduct(ray.getDirection(), data.normal);
    Float_t sin2T = n * n * (Float_t(1.0) - cosI * cosI);

    if (sin2T > 1.0)
        return 1.0;

    using std::sqrt;
    Float_t cosT = sqrt(1.0 - sin2T);
    Float_t rPer = (n * cosI - cosT) / (n * cosI + cosT);
    Float_t rPar = (cosI - n * cosT) / (cosI + n * cosT);
    return (rPer * rPer + rPar * rPar) / Float_t(2.0);
}
Color handleReflectionAndRefraction(const Scene& scene,
                                    const Ray& ray,
                                    const IntersectionData& data,
                                    uint8 depth)
{
    bool hasReflexion = data.material->getReflexion() > Globals::Epsilon;
    bool hasTransparency = data.material->getTransparency() > Globals::Epsilon;

    if (!(hasReflexion || hasTransparency) || depth >= MAX_DEPTH)
        return 0;

    Float_t reflectance = data.material->getReflexion();
    Float_t transmittance = data.material->getTransparency();
    Color reflexion;
    Color transparency;

    if (hasReflexion && hasTransparency)
    {
        reflectance = getFresnelReflectance(data, ray);
        transmittance = 1.0 - reflectance;
    }
    if (hasReflexion)
        reflexion = handleReflection(scene, ray, data, depth) * reflectance;
    if (hasTransparency)
        transparency = handleTransparency(scene, ray, data, depth) * transmittance;
    return reflexion + transparency;
}
Tools::Refract is simply calling glm::refract internally. (So that I can change easily if I want)
I don't handle notions of n1 and n2: n2 is considered to always be 1 for air.
Am I missing something obvious?
EDIT
After adding a way to know whether a ray is inside an object (and negating the normal if so), I have this:
While looking around for help, I stumbled upon this post, but I don't think its answer explains anything. Reading it, I don't understand what I'm supposed to do at all.
EDIT 2
I've tried a lot of things and I am currently at this point:
It's better, but I'm still not sure it's right. I'm using this image as an inspiration:
But that one is using two indices of refraction (to be closer to reality), while I want to simplify and always consider air as the second (in or out) material.
What I essentially changed in my code is here:
inline Vec_t Refract(Vec_t v, const IntersectionData& data, Float_t eta)
{
    Float_t n = eta;

    if (data.isInside)
        n = 1.0 / n;

    double cosI = Tools::DotProduct(v, data.normal);
    return v * n - data.normal * (-cosI + n * cosI);
}
Here is another view of the same spheres :
EDIT: I've figured out that the previous version of this was not entirely correct, so I've edited the answer.
After reading all the comments and the new versions of the question, and doing some experimentation myself, I produced the following version of the refract routine:
float3 refract(float3 i, float3 n, float eta)
{
    eta = 2.0f - eta;
    float cosi = dot(n, i);
    float3 o = (i * eta - n * (-cosi + eta * cosi));
    return o;
}
This time calling it does not require any additional operations:
float3 refr = refract(rayDirection, normal, refrIdx);
The only thing I am still not sure about is the inverting of the refractive index when doing the inside-ray intersection. In my tests, the produced images didn't differ much whether I inverted the index or not.
Below some images with different indices:
For more images see the link, because the site does not allow me to put more of them here.
I am answering this as a physicist rather than a programmer; as I haven't had time to read all the code, I won't be giving the code for the fix, just the general idea.
From what you have said above, the black ring appears when n_object is less than n_air. This is usually only true if you are inside an object, say inside water or the like, but materials have been constructed with weird properties like that, so it should be supported.
In this type of situation there are rays of light that can't be refracted, as the refraction formula would put the refracted ray on the SAME side of the interface between the materials, which obviously doesn't make sense as refraction. In this situation the surface will instead act as if it were reflective. This is the situation that is often referred to as total internal reflection.
To be fully exact, almost every refractive object will also be partially reflective, and the fraction of light that is reflected or transmitted (and therefore refracted) is given by the Fresnel equations. For this case, though, it would still be a good approximation to just treat the surface as reflective beyond the critical angle and as transmitting (and therefore refracting) otherwise.
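To illustrate that idea, here is a minimal, hedged sketch of a Snell refraction helper with a total internal reflection check, written with glm; it is not the poster's Tools::Refract, the names are hypothetical, and eta here means n_incident / n_transmitted:

#include <glm/glm.hpp>
#include <cmath>
#include <optional>

// Refract `incident` about `normal` (both unit length, normal facing the incident side).
// Returns std::nullopt on total internal reflection, in which case the caller should
// fall back to the reflected ray instead.
std::optional<glm::vec3> refractDir(const glm::vec3& incident,
                                    const glm::vec3& normal,
                                    float eta) {
    float cosI = -glm::dot(incident, normal);
    float sin2T = eta * eta * (1.0f - cosI * cosI);  // Snell's law: squared sine of the transmitted angle
    if (sin2T > 1.0f)
        return std::nullopt;                         // total internal reflection: no transmitted ray
    float cosT = std::sqrt(1.0f - sin2T);
    return eta * incident + (eta * cosI - cosT) * normal;
}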
Also, there are situations where this black ring effect can be seen when reflection is not possible (because it is dark in those directions) but transmitted light is possible. This could be demonstrated by, say, taking a tube of card that fits tightly to the edge of the object, pointing it directly away, and shining light only inside the tube, not outside.

OpenGL ray casting (picking): account for object's transform

For picking objects, I've implemented a ray casting algorithm similar to what's described here. After converting the mouse click to a ray (with origin and direction) the next task is to intersect this ray with all triangles in the scene to determine hit points for each mesh.
I have also implemented the triangle intersection test algorithm based on the one described here. My question is, how should we account for the objects' transforms when performing the intersection? Obviously, I don't want to apply the transformation matrix to all vertices and then do the intersection test (too slow).
EDIT:
Here is the UnProject implementation I'm using (I'm using OpenTK, by the way). I compared the results; they match what gluUnProject gives me:
private Vector3d UnProject(Vector3d screen)
{
    int[] viewport = new int[4];
    OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);

    Vector4d pos = new Vector4d();

    // Map x and y from window coordinates, map to range -1 to 1
    pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
    pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
    pos.Z = screen.Z * 2.0f - 1.0f;
    pos.W = 1.0f;

    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(GetModelViewMatrix() * GetProjectionMatrix()));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);

    return pos_out / pos2.W;
}
Then I'm using this function to create a ray (with origin and direction):
private Ray ScreenPointToRay(Point mouseLocation)
{
    Vector3d near = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));
    Vector3d far = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 1));

    Vector3d origin = near;
    Vector3d direction = (far - near).Normalized();
    return new Ray(origin, direction);
}
You can apply the reverse transformation of each object to the ray instead.
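For illustration, a minimal sketch of that idea in C++ with glm (rather than OpenTK); the names are hypothetical, not from the question:

#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, direction; };

// Instead of transforming every vertex into world space, transform the world-space
// ray into the object's local space with the inverse model matrix, then intersect
// against the untransformed triangles.
Ray worldRayToObjectSpace(const Ray& worldRay, const glm::mat4& modelMatrix) {
    glm::mat4 inv = glm::inverse(modelMatrix);
    Ray local;
    local.origin    = glm::vec3(inv * glm::vec4(worldRay.origin, 1.0f));                     // point: w = 1
    local.direction = glm::normalize(glm::vec3(inv * glm::vec4(worldRay.direction, 0.0f)));  // direction: w = 0
    return local;
}

Note that with non-uniform scaling a distance measured along the object-space ray is not the world-space distance, so compare hits from different objects in a common space (for example, transform the local hit point back to world space first).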
I don't know if this is the best/most efficient approach, but I recently implemented something similar like this:
In world space, the origin of the ray is the camera position. In order to get the direction of the ray, I assumed the user had clicked on the near plane of the camera and thus applied the 'reverse transformation' - from screen space to world space - to the screen space position
( mouseClick.x, viewportHeight - mouseClick.y, 0 )
and then subtracted the origin of the ray, i.e. the camera position, from the now transformed mouse click position.
In my case, there was no object-specific transformation, meaning I was done once I had my ray in world space. However, transforming origin & direction with the inverse model matrix would have been easy enough after that.
You mentioned that you tried to apply the reverse transformation, but that it didn't work - maybe there's a bug in there? I used GLM - i.e. glm::unProject - for this.
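For reference, a minimal sketch of building a picking ray with glm::unProject, the GLM route mentioned above; the view/projection/viewport parameters are assumptions, not values from the post:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>  // glm::unProject

struct Ray { glm::vec3 origin, direction; };

// mouse is in window coordinates with the origin at the bottom-left
// (flip y first if your windowing system uses a top-left origin).
Ray screenPointToRay(const glm::vec2& mouse,
                     const glm::mat4& view,        // passed as the "model" matrix to get world space
                     const glm::mat4& projection,
                     const glm::vec4& viewport) {
    glm::vec3 nearPoint = glm::unProject(glm::vec3(mouse, 0.0f), view, projection, viewport);
    glm::vec3 farPoint  = glm::unProject(glm::vec3(mouse, 1.0f), view, projection, viewport);
    return { nearPoint, glm::normalize(farPoint - nearPoint) };
}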