Problem implementing reflections in ray tracing - c++

I've implemented a simple ray tracer and now I'm trying to add reflections, but objects end up looking transparent.
Here is my code for getting the reflected ray.
ray* reflected = new ray();
// R = D - 2(D . N)N
reflected->direction = rayIn.direction - (2 * glm::dot(rayIn.direction, normal)) * normal;
// offset the origin along the normal to avoid self-intersection
reflected->origin = int_point + epsilon * normal;
outColor += ((int_object->reflectivity) * intersectray(*reflected, depth - 1));
Here are the resulting images.
With the reflection code:
Without it:
I'll edit the post if more code is needed.
Edit: It seems the problem is in how I iterate through the objects in the scene. I insert the objects as
scene->add(sphere1);
scene->add(sphere2);
But when I change this to:
scene->add(sphere2);
scene->add(sphere1);
the output is correct.
Sphere 1 is closer to the camera than sphere 2, and they do not overlap.

The problem was this part of the code:
for (/* each object in scene */) {
    double intersection = (*objIterator)->intersect(rayIn, normal);
    if (intersection < minDistance && intersection > epsilon)
    {
        minDistance = intersection;
        int_object = *objIterator;
        int_point = rayIn.origin + intersection * rayIn.direction + (epsilon * normal);
    }
}
Here normal is used later for other calculations, but the first line of the loop body updates normal for every object tested, even when that object is not the closest hit. So I added a vector to store the normal of the closest intersected object and used it later.
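A sketch of that fix (hypothetical variable and container names, same structure as the loop above): give each candidate its own temporary normal and copy it only when that candidate becomes the closest hit.
glm::vec3 closestNormal;   // normal of the closest hit only
for (auto objIterator = scene->objects.begin(); objIterator != scene->objects.end(); ++objIterator) {
    glm::vec3 tempNormal;  // filled in by intersect() for this candidate only
    double intersection = (*objIterator)->intersect(rayIn, tempNormal);
    if (intersection < minDistance && intersection > epsilon) {
        minDistance = intersection;
        int_object = *objIterator;
        closestNormal = tempNormal;  // keep the winning object's normal
        int_point = rayIn.origin + intersection * rayIn.direction + (epsilon * closestNormal);
    }
}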


C++ raytracer bug

I've written a raytracer in C++. This is the snippet for calculating the diffuse component:
// diffuse component
color diffuse(0, 0, 0);
if (intrs.mat.diffuseness > 0)
{
    for (auto &light : lights)
    {
        // define ray from hit object to light
        ray light_dir(intrs.point, (light->point - intrs.point).normalize());
        double nl = light_dir.direction * intrs.normal; // dot product
        double diminish_coeff = 1.0;
        double dist = intrs.point.sqrDistance(light->point);
        // check whether it reaches the light
        if (nl > 0)
        {
            for (int i = 0; i < (int)shapes.size(); ++i)
            {
                shape::intersection temp_intrs(shapes[i]->intersect(light_dir, shapes[i]->interpolate_normals));
                if (temp_intrs.valid && temp_intrs.point.sqrDistance(intrs.point) < dist)
                {
                    diminish_coeff *= shadow_darkness;
                    break;
                }
            }
        }
        diffuse += intrs.mat.diffuseness * intrs.mat.col * light->light_color * light->light_intensity * nl * diminish_coeff;
    }
}
Of course, I can't post the entire code, but I think it should be clear what I'm doing here - intrs is the current intersection of a ray and an object, and shapes is a vector of all objects in the scene.
Colors are represented as RGB in the (0, 1) range. Addition and multiplication of colors are simple member-wise operations. Only when ray tracing is finished and I write to the image file do I multiply the colors by 255, clamping any component that ends up above 255.
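For reference, that final conversion looks roughly like this (a minimal sketch with a hypothetical helper, not the asker's actual writer code):
#include <algorithm>
#include <cstdint>

// Hypothetical helper: scale a color component from [0, 1] to [0, 255],
// clamping anything that would exceed 255 after scaling.
static uint8_t toByte(double c) {
    return static_cast<uint8_t>(std::min(c * 255.0, 255.0));
}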
Currently, there is one point light in the scene and it's white: color(1,1,1), intensity = 1.0.
This is my rendered image:
So, this is not right - the cupboard on the left is supposed to be green, and the box is supposed to be red.
Is there something obviously wrong with my implementation? I can't seem to figure it out. I'll post some more code if necessary.
It seems that your diffuse += line should be inside the if (nl > 0) condition, not outside it.
I found the problem. For some reason, my intrs.normal vector wasn't normalized. Thank you everyone for your help.
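For reference, a minimal sketch of that fix, assuming the vector type exposes the same normalize() already used for light_dir above:
// Make sure the surface normal is unit length before the diffuse dot product.
intrs.normal = intrs.normal.normalize();
double nl = light_dir.direction * intrs.normal; // now a true cosine in [-1, 1]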

Raytracing Reflection distortion

I've started coding a raytracer, but today I encountered a problem when dealing with reflection.
First, here is an image of the problem:
I only computed the object's reflected color (so no lighting is applied to the reflected object).
The problem is the distortion, which I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So, for example, when my ray hits the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;
    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}
Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position &impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things wrong with the example you provided. This may -- or may not -- answer your question, but since I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.
you have two classes, Vector and Position. It may well seem like a good idea, but why not treat a position as the translation vector from the origin? That would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the math for you (like glm does). That way you'll also avoid errors like naming your dot-product function scale().
you create a camera from the position (which is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...}, and for each pixel of your image/reflections/refractions/... you cast rays {origin + direction} (hence the name raytracer, not cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ..., whereas a ray is simply... a ray (it could be the ray from the image plane to the first object, or a ray created by reflection, diffraction, scattering, ...).
and for the final point, I think your error may come from the Math::calcNormalVector(...) function. For a sphere at a position P and an intersection point I, the normal N is: N = normalize(I - P); (see the sketch below).
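A minimal sketch of that formula using the Vector/Position types from the question (the component-wise constructor is an assumption):
// N = normalize(I - P): the normal at intersection point I on a sphere centred at P.
Vector sphereNormal(const Position &I, const Position &P) {
    Vector n(I.x - P.x, I.y - P.y, I.z - P.z);  // hypothetical constructor
    n.makeUnit();                               // makeUnit() is the question's normalize
    return n;
}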
EDIT: it seems your problem comes from Rt::getClosestObj. Everything else looks fine.
There are tons of websites/blogs/educational resources online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you can't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, with the code you provided, I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't it be something like this instead?
this->getClosestObj(reflected, Camera(impact));

volume rendering raycasting artifacts

I am trying to implement simple raycasting volume rendering in WebGL.
It is kind of working, but there are some artifacts when you rotate the volume around (i.e. the head appears deformed).
Live demo:
http://fnndsc.github.io/vjs/#shaders_raycasting_adibrain
GLSL Code used for debugging:
https://github.com/FNNDSC/vjs/blob/master/src/shaders/shaders.raycasting.secondPass.frag
Simplified version of the code:
for (int rayStep = 0; rayStep < maxSteps; rayStep++) {
    // map world coordinates to data coordinates
    vec4 dataCoordinatesRaw = uWorldToData * currentPosition;
    ivec3 dataCoordinates = ivec3(int(floor(dataCoordinatesRaw.x)),
                                  int(floor(dataCoordinatesRaw.y)),
                                  int(floor(dataCoordinatesRaw.z)));
    float intensity = getIntensity(dataCoordinates);
    // we have the intensity now
    vec3 colorSample = vec3(intensity);
    float alphaSample = intensity;
    accumulatedColor += (1.0 - accumulatedAlpha) * colorSample * alphaSample;
    accumulatedAlpha += alphaSample;
    // advance the ray
    currentPosition += deltaDirection;
    accumulatedLength += deltaDirectionLength;
    if (accumulatedLength >= rayLength || accumulatedAlpha >= 1.0) break;
}
I do not understand what could explain those artifacts.
Could it be because I do not use gradients to modulate opacity/color?
Any hint would be very welcome.
The backface coordinates were not computed properly during the first pass of the raycasting. The range of the "normalized" coordinates was not [0, 1]; it was [-0.5, 1.5], which created the visualization artifact, since all values outside the [0, 1] range were clamped.

OpenGL ray casting (picking): account for object's transform

For picking objects, I've implemented a ray casting algorithm similar to what's described here. After converting the mouse click to a ray (with origin and direction) the next task is to intersect this ray with all triangles in the scene to determine hit points for each mesh.
I have also implemented the triangle intersection test algorithm based on the one described here. My question is, how should we account for the objects' transforms when performing the intersection? Obviously, I don't want to apply the transformation matrix to all vertices and then do the intersection test (too slow).
EDIT:
Here is the UnProject implementation I'm using (I'm using OpenTK, by the way). I compared the results; they match what GluUnProject gives me:
private Vector3d UnProject(Vector3d screen)
{
    int[] viewport = new int[4];
    OpenTK.Graphics.OpenGL.GL.GetInteger(OpenTK.Graphics.OpenGL.GetPName.Viewport, viewport);
    Vector4d pos = new Vector4d();
    // Map x and y from window coordinates to the range -1 to 1
    pos.X = (screen.X - (float)viewport[0]) / (float)viewport[2] * 2.0f - 1.0f;
    pos.Y = 1 - (screen.Y - (float)viewport[1]) / (float)viewport[3] * 2.0f;
    pos.Z = screen.Z * 2.0f - 1.0f;
    pos.W = 1.0f;
    Vector4d pos2 = Vector4d.Transform(pos, Matrix4d.Invert(GetModelViewMatrix() * GetProjectionMatrix()));
    Vector3d pos_out = new Vector3d(pos2.X, pos2.Y, pos2.Z);
    return pos_out / pos2.W;
}
Then I'm using this function to create a ray (with origin and direction):
private Ray ScreenPointToRay(Point mouseLocation)
{
    Vector3d near = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 0));
    Vector3d far = UnProject(new Vector3d(mouseLocation.X, mouseLocation.Y, 1));
    Vector3d origin = near;
    Vector3d direction = (far - near).Normalized();
    return new Ray(origin, direction);
}
You can apply the reverse transformation of each object to the ray instead.
I don't know if this is the best or most efficient approach, but I recently implemented something similar, as follows:
In world space, the origin of the ray is the camera position. In order to get the direction of the ray, I assumed the user had clicked on the near plane of the camera and thus applied the 'reverse transformation' - from screen space to world space - to the screen space position
( mouseClick.x, viewportHeight - mouseClick.y, 0 )
and then subtracted the origin of the ray, i.e. the camera position, from
the now transformed mouse click position.
In my case, there was no object-specific transformation, meaning I was done once I had my ray in world space. However, transforming origin & direction with the inverse model matrix would have been easy enough after that.
You mentioned that you tried to apply the reverse transformation, but that it didn't work - maybe there's a bug in there? I used GLM - i.e. glm::unProject - for this.
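As a sketch of that last step (not the poster's actual code; it assumes glm and a standard column-major model matrix), the ray can be brought into the object's local space so the untransformed triangles can be tested directly:
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin, direction; };

// Transform a world-space ray into an object's local space using the
// inverse model matrix; points use w = 1, directions use w = 0.
Ray ToObjectSpace(const Ray &worldRay, const glm::mat4 &model)
{
    glm::mat4 inv = glm::inverse(model);
    Ray local;
    local.origin    = glm::vec3(inv * glm::vec4(worldRay.origin, 1.0f));
    local.direction = glm::normalize(glm::vec3(inv * glm::vec4(worldRay.direction, 0.0f)));
    return local;
}
Note that with non-uniform scaling the hit distance along the local-space ray is not the same as the world-space distance, so it is safest to transform the hit point back to world space before comparing distances between objects.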

Ray Tracing - Reflection

I'm now working on the reflection part of my ray tracer. I have everything else working correctly, including a sphere with shadows. Now I'm implementing reflection, but I can't get it to work. My algorithm is below:
traceRay(Ray ray, int counter) {
    // look through the intersections between the ray and the list of objects
    // find the final index aka the winning index (if final index == -1, return the background color)
    // then calculate the intersection point
    // perform the reflection calculation here
    if (counter > 1 && winning object's reflectivity > 1) {
        // get the intersection normal, vector N
        // calculate the reflection ray, R
        // let I be the inverse of the direction of the incoming ray
        // calculate R = 2aN - I (a = N dotProduct I)
        // the reflection ray originates at the point of intersection between the incoming ray and the sphere, with direction R
        Ray reflecRay(intersection_position, R);
        Color reflection = traceRay(reflecRay, counter + 1);
        // multiply by the fraction ks
        reflection = reflection * ks;
    }
    // the color of the sphere, calculated using the Phong formula in the shadeRay function
    Color prefinal = shadeRay();
    // return the total color of prefinal + reflection
}
I've been trying to get the reflection working but can't. Can anyone please let me know if my algorithm for the traceRay function is correct?
When reflecting a ray, you need to move it along the reflector's normal to avoid intersection with the reflector itself. For example:
const double ERR = 1e-12;
Ray reflecRay(intersection_position + normal * ERR, R);
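Putting that together with the pseudocode from the question, a rough sketch might look like this (hypothetical types and names throughout, and assuming reflectivity lies in (0, 1], so the test is > 0 rather than > 1):
Color traceRay(const Ray &ray, int counter) {
    // ... find the winning object, intersection_position and surface normal N ...
    Color reflection(0, 0, 0);
    if (counter > 1 && winningObject.reflectivity > 0) {
        Vector I = -ray.direction;                            // inverse of the incoming direction
        Vector R = 2.0 * dot(N, I) * N - I;                   // R = 2aN - I, with a = N . I
        Ray reflecRay(intersection_position + N * ERR, R);    // offset along N to avoid self-intersection
        reflection = traceRay(reflecRay, counter + 1) * ks;   // scale by the fraction ks
    }
    Color prefinal = shadeRay();                              // local Phong shading
    return prefinal + reflection;                             // total = local + reflected contribution
}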