Ray Tracing - Reflection - C++

I'm now working on the reflection part of my ray tracer. I have everything else working correctly, including rendering a sphere with shadows. However, I can't get reflection to work. My algorithm is below:
traceRay(Ray ray, int counter){
    // loop through the intersections between the ray and the list of objects
    // find the winning (closest) index; if the winning index == -1, return the background color
    // then calculate the intersection point
    Color reflection(0, 0, 0); // stays black if no reflection is computed
    // perform the reflection calculation here
    if(counter > 1 && winning object's reflectivity > 1){
        // get the intersection normal, vector N
        // calculate the reflection ray, R
        // let I be the inverse of the direction of the incoming ray
        // calculate R = 2aN - I (a = N dotProduct I)
        // the reflection ray originates at the point of intersection between
        // the incoming ray and the sphere, with direction R
        Ray reflecRay(intersection_position, R);
        reflection = traceRay(reflecRay, counter + 1);
        // multiply by the fraction ks
        reflection = reflection * ks;
    }
    // the color of the sphere, calculated using the Phong formula in the shadeRay function
    Color prefinal = shadeRay();
    // return the total color: prefinal + reflection
    return prefinal + reflection;
}
I'm trying to get reflection working but can't. Can anyone please let me know if my algorithm for the traceRay function is correct?
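For reference, the formula in the comments (R = 2aN - I, with a = N dotProduct I) corresponds to a small helper along these lines; a minimal sketch assuming a Vec3 type with a dot() function and the usual operators:
Vec3 reflectDir(const Vec3& d, const Vec3& N){ // assumed Vec3 vector type
    Vec3 I = -d;                 // I = inverse of the incoming ray direction
    double a = dot(N, I);        // a = N . I
    return N * (2.0 * a) - I;    // R = 2aN - I
}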

When reflecting a ray, you need to move its origin along the reflector's normal so the new ray doesn't immediately intersect the reflector itself. For example:
const double ERR = 1e-12;
Ray reflecRay (intersection_position + normal*ERR, R);
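The exact offset is scene-dependent: with large coordinates or single-precision floats, an offset as small as 1e-12 can be lost to rounding, so somewhat larger values (e.g. 1e-4) are common in practice.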

Raytracing incorrect soft shadow sampling

Hello, I'm studying a ray tracing algorithm and I'm stuck on the Monte Carlo part. Rendering without an area light, my output was correct, but when I added the area-light implementation for generating soft shadows, I ran into a problem.
Here is the before/after output image.
When I move the blue sphere down, the problem persists (notice that the artifact follows the sphere along the white dotted line).
Note that the sphere and the area light are at the same z offset. When I bring the blue sphere to the front of the screen, the artifact is gone. I think the problem is caused by the uniform cone sampling or the sphere sampling function, but I'm not sure.
Here is the function:
template <typename T>
CVector3<T> UConeSample(T u1, T u2, T costhetamax,
                        const CVector3<T>& x, const CVector3<T>& y, const CVector3<T>& z) {
    // interpolate cos(theta) between costhetamax and 1, then derive sin(theta)
    T costheta = Math::Lerp(u1, costhetamax, T(1));
    T sintheta = std::sqrt(T(1) - costheta * costheta); // std::sqrt/std::cos/std::sin keep full precision when T = double (sqrtf etc. force float)
    T phi = u2 * T(2) * T(M_PI);
    return std::cos(phi) * sintheta * x +
           std::sin(phi) * sintheta * y +
           costheta * z;
}
I'm generating the random float u1, u2 values from a van der Corput sequence.
This is the sphere sampling method:
CPoint3<float> CSphere::Sample(const CLightSample& ls, const CPoint3<float>& p, CVector3<float>* n) const {
    // translate the sphere centre to world space
    CPoint3<float> pCentre = o2w(CPoint3<float>(0.0f));
    CVector3<float> wc = Vector::Normalize(pCentre - p);
    CVector3<float> wcx, wcy;
    // create a local coordinate system around wc for the uniform cone sample
    Vector::CoordinateSystem(wc, &wcx, &wcy);
    // check if p is inside the sphere (epsilon value -- is this right?)
    if (Point::DistSquare(p, pCentre) - radius * radius < 1e-4f)
        return Sample(ls, n);
    // otherwise, outside: evaluate the cosine of the cone's half-angle
    float sinthetamax2 = radius * radius / Point::DistSquare(p, pCentre);
    float costhetamax = sqrtf(Math::Max(0.0f, 1.0f - sinthetamax2));
    // surface properties
    CSurfaceProps dg_sphere;
    float thit, ray_epsilon;
    CPoint3<float> ps;
    // create the ray direction from the sampled cone, then send the ray at the sphere
    CRay ray(p, Vector::UConeSample(ls.u1, ls.u2, costhetamax, wcx, wcy, wc), 1e-3f);
    // check intersection against the sphere, fill surface properties, and calculate the hit point
    if (!Intersect(ray, &thit, &ray_epsilon, &dg_sphere))
        thit = Vector::Dot(pCentre - p, Vector::Normalize(ray.d));
    // evaluate the surface normal at the sample point
    ps = ray(thit);
    *n = CVector3<float>(Vector::Normalize(ps - pCentre));
    // return the sample point
    return ps;
}
Does anyone have any suggestions? Thanks.
I solved the problem.
It was caused by the RNG (random number generator) in the light sample class: the sampling requires well-distributed u1, u2 values (low-discrepancy sampling).
Ray tracing needs a higher-quality RNG (one option is the Mersenne Twister pseudorandom number generator) and a good shuffling algorithm.
I hope this helps. Thanks to everyone who posted comments.
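A minimal sketch of what that fix amounts to, assuming CLightSample is default-constructible and exposes the u1/u2 fields used above (the helper name is illustrative):
#include <random>

// one generator for the whole render, seeded once; re-seeding per sample
// would destroy the quality of the sequence
std::mt19937 rng{std::random_device{}()};
std::uniform_real_distribution<float> uniform01(0.0f, 1.0f);

CLightSample makeLightSample() { // hypothetical helper
    CLightSample ls;
    ls.u1 = uniform01(rng); // well distributed in [0, 1)
    ls.u2 = uniform01(rng);
    return ls;
}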

Implementing soft shadows in a ray tracer

What I am trying to do is implement soft shadows in my simple ray tracer, developed in C++. The idea behind this, if I understood correctly, is to shoot multiple rays towards the light instead of a single ray towards its center, and average the results. The rays are therefore shot at different positions on the light. So far I am using random points, and I don't know whether that is correct or whether I should use points regularly distributed on the light's surface. Assuming that I am doing it right, I choose a random point on the light, which in my framework is implemented as a sphere. This is given by:
Vec3<T> randomPoint() const
{
    // persistent generator: constructing and re-seeding one on every call
    // would be slow and would hurt the quality of the sequence
    static std::random_device rd; // used for the new <random> library
    static std::mt19937 gen(rd());
    static std::uniform_real_distribution<> dis(-1, 1);
    T x;
    T y;
    T z;
    // random vector in the unit sphere, by simple rejection sampling
    do
    {
        x = dis(gen);
        y = dis(gen);
        z = dis(gen);
    } while (x * x + y * y + z * z > 1);
    return center + Vec3<T>(x, y, z) * radius;
}
After this, I don't know exactly how I should proceed, since my rendering equation (in my simple ray tracer) is defined as follows:
Vec3<float> surfaceColor = 0;
for (int i = 0; i < lightsInTheScene.size(); i++) {
    surfaceColor += obj->surfaceColor * transmission *
                    std::max(float(0), nHit.dot(lightDirection)) * g_lights[i]->emissionColor;
}
return surfaceColor + obj->emissionColor;
where transmission is a simple float which is set to 0 when the ray that goes from my hitPoint to the lightCenter hits an object on the way.
So, what I tried to do was:
- create multiple rays towards random points on the light
- count how many of them hit an object on their path, and remember that number
For simplicity, imagine I shoot 3 shadow rays from my point towards random points on the light, and only 2 of the 3 reach the light. The final color of my pixel will then be color * shadowFactor, where shadowFactor = 2/3. In my equation I then drop the transmission factor (which is now wrong) and use shadowFactor instead. The problem is that my equation contains:
std::max(float(0), nHit.dot(lightDirection))
which I don't know how to change, since I no longer have a single lightDirection pointing towards the center of the light. Can you please help me understand what I should do and what's wrong so far? Thanks in advance!
You should evaluate the entire BRDF for the picked light samples. Then you also have the light direction (the vector from the object position to the picked light sample), and you can average these results. Note that most area lights have a non-isotropic emission characteristic (i.e. the amount of light emitted from a point varies with the outgoing direction).
Averaging only the visibility does not produce correct results (although they are usually visually plausible).
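A minimal sketch of that approach, reusing the question's names (randomPoint, nHit, surfaceColor, emissionColor); hitPoint, the occluded() visibility test, a normalize() that returns a unit vector, and the sample count N are assumptions:
// evaluate the full diffuse term once per light sample, then average,
// instead of scaling a single centre-ray result by a visibility fraction
Vec3<float> direct = 0;
const int N = 16; // shadow rays per light (assumed)
for (int s = 0; s < N; ++s) {
    Vec3<float> lp = g_lights[i]->randomPoint();              // sample point on the light
    Vec3<float> lightDirection = (lp - hitPoint).normalize(); // per-sample direction
    if (!occluded(hitPoint, lp))                              // visibility of this sample
        direct += obj->surfaceColor *
                  std::max(float(0), nHit.dot(lightDirection)) *
                  g_lights[i]->emissionColor;
}
surfaceColor += direct * (1.0f / N); // average of the sampled contributions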

Raytracer Refraction Bug

I'm writing a raytracer in C++, and I've been having some issues with refraction. I'm rendering a sphere and a ground plane, and the sphere should refract. However, it looks more like a sphere within a sphere: the "outer" sphere looks to be shaded properly, but not refracting, while the "inner" sphere looks like it's being self-shadowed. Here's a link to what it looks like: http://imgur.com/QVGkeBT.
Here's the relevant code.
// inside the main raytrace function
if (refraction > 0.0f) { // the surface is refractive
    // calculate the refraction vector
    Ray refract(intersection,
                objList[bestObj]->refractedRay(
                    ray.dir, intersection, &cos_theta, &R0));
    // recurse
    refrColor = raytrace(refract);
}
else { // no refraction
    refrColor = background;
}

// refractedRay(vec3, vec3, float*, float*)
// ...initialize variables, do geometric transforms
// into air, out of the object
if (dot(ray, normal) < 0) {
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, -normal);
}
// into the object, out of air
else {
    n1 = 1.0f;
    n2 = ior;
    *cos = dot(ray, normal);
    normal = -normal;
}
// check the value under the square root
float n = n1 / n2;
float disc = 1 - (pow(n, 2) * (1 - pow(*cos, 2)));
if (disc < 0) { // total internal reflection
    return ray - 2 * -(*cos) * normal; // reflection vector
}
return (n * ray) + (((n * (*cos)) - sqrt(disc)) * normal);
The sphere used to look worse; then I remembered to normalize my vectors, and now it looks like this. Previously, it looked like only the inner sphere throughout. Inside the main raytrace function, I handle refraction the same way as reflection, just using the refracted ray instead. I've also tried offsetting the incoming intersection point and ray by an epsilon, to check for self-refraction like you can get with shadows.
Any help would be appreciated :)
I haven't checked your refraction formulae, but this looks wrong:
// into air, out of the object
if (dot(ray, normal) < 0) {
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, -normal);
}
If the dot product of the incident ray and the normal is less than zero, and assuming the normal points outwards from the object (which it probably should), then this case corresponds to air -> inside, so your refractive indices should be swapped. As it is now, you are rendering a sphere with an index of refraction of 1 / ior, and since that value is less than 1 you are observing total internal reflection at the edges.
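In other words, a minimal sketch of the corrected branch, keeping the question's variable names:
if (dot(ray, normal) < 0) { // entering: air -> object
    n1 = 1.0f;
    n2 = ior;
    *cos = dot(ray, -normal);
}
else { // leaving: object -> air
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, normal);
    normal = -normal;
}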
Here is one of my implementations which you can take a look at to see if anything is missing (it has more features, but you should be able to identify the parts you are interested in and check whether your computations match). To me it looks all right, so I think fixing the refractive indices should do it.
The nondeterministic pattern in the center of the sphere, though, definitely looks like self-intersection. Make sure that in the case of reflection you push the reflected ray slightly outside the intersected surface, and in the case of refraction you push the refracted ray slightly inside, to avoid self-intersection.
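A sketch of those two offsets (EPS, hit, and the direction names are assumptions, not the question's code):
const float EPS = 1e-4f;
Ray reflected(hit + normal * EPS, reflectDir); // nudged outside the surface
Ray refracted(hit - normal * EPS, refractDir); // nudged inside the surface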

Ray tracing vectors

So I decided to write a ray tracer the other day, but I got stuck because I forgot all my vector math.
I've got a point behind the screen (the eye/camera, 400,300,-1000) and then a point on the screen (a plane, from 0,0,0 to 800,600,0), which I get just by using the x and y values of the current pixel I'm looking at (I'm using SFML for rendering, so it's something like 267,409,0).
The problem is, I have no idea how to cast the ray correctly. I'm using this for testing sphere intersection (C++):
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
    // operator * between two Vec3s is a dot product
    Vec3 dist = ray.start - sphere.pos; // both Vec3s
    float B = -1 * (ray.dir * dist);
    float D = B * B - dist * dist + sphere.radius * sphere.radius; // radius is a float
    if (D < 0.0f)
        return false;
    float t0 = B - sqrtf(D);
    float t1 = B + sqrtf(D);
    bool ret = false;
    if ((t0 > 0.1f) && (t0 < t))
    {
        t = t0;
        ret = true;
    }
    if ((t1 > 0.1f) && (t1 < t))
    {
        t = t1;
        ret = true;
    }
    return ret;
}
So I get that the start of the ray would be the eye position, but what is the direction?
Or, failing that, is there a better way of doing this? I've heard of some people using the ray start as (x, y, -1000) and the direction as (0,0,1), but I don't know how that would work.
On a side note, how would you do transformations? I'm assuming that to change the camera angle you just adjust the x and y of the camera (or of the screen, if you need a drastic change).
The parameter "ray" in the function,
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
...
}
should already contain the direction information, and with this direction you need to check whether the ray intersects the sphere or not. (The incoming "ray" parameter is the vector between the camera point and the pixel the ray is sent through.)
Therefore the local "dist" variable seems obsolete.
One thing I can see is that when you create your rays you are not using the center of each pixel in the screen as the point for building the direction vector. You do not want to use just the (x, y) coordinates on the grid for building those vectors.
I've taken a look at your sample code, and the calculation is indeed incorrect. This is what you want:
http://www.csee.umbc.edu/~olano/435f02/ray-sphere.html (I took this course in college, this guy knows his stuff.)
Essentially it means you have a ray, with an origin and a direction, and a sphere, with a center and a radius. You plug the ray equation into the sphere equation and solve for t. That t is the distance between the ray origin and the intersection point on the sphere's surface. I do not think your code does this.
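For reference, that substitution works out as follows, writing the ray as o + t*d (with d normalized) and the sphere as |x - c|^2 = r^2:
\[
|o + t\,d - c|^2 = r^2 \;\Rightarrow\; t^2 + 2t\,\big(d \cdot (o - c)\big) + |o - c|^2 - r^2 = 0,
\]
\[
t = -\,d \cdot (o - c) \pm \sqrt{\big(d \cdot (o - c)\big)^2 - |o - c|^2 + r^2}.
\]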
So I get that the start of the ray would be the eye position, but what is the direction?
You have a camera defined by the vectors front, up, and right (perpendicular to each other and normalized) and by a "position" (the eye position).
You also have the width and height of the viewport (in pixels), and the vertical field of view (vfov) and horizontal field of view (hfov) in degrees or radians.
There are also the 2D x and y coordinates of the pixel; the 2D X axis points to the right and the 2D Y axis points down.
For a flat screen, the ray can be calculated like this:
startVector = eyePos;
endVector = startVector
          + front
          + right * tan(hfov/2) * (((x + 0.5) / width) * 2.0 - 1.0)
          + up    * tan(vfov/2) * (1.0 - ((y + 0.5) / height) * 2.0);
rayStart = startVector;
rayDir = normalize(endVector - startVector);
That assumes the screen plane is flat. For extreme field of view angles (fov >= 180 degrees) you might want to make the screen plane spherical and use different formulas.
how would you do transformations
Matrices.
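For example, a minimal sketch of rotating the camera basis with a matrix (Mat3 and rotationY are assumed helpers, not part of the answer above):
Mat3 rot = rotationY(yawRadians); // assumed 3x3 rotation about the world up axis
front = rot * front;              // rotate the whole orthonormal basis
right = rot * right;
up    = rot * up;
// rays are then generated from the rotated basis exactly as above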

A few questions about ray tracing with OpenGL

I need to do a limited form of ray tracing. I do not need reflections: I only need to change the color of a pixel depending on how the ray passes by an object, and to handle refraction. I also only need to test for intersections between the ray and spheres and disks, nothing else.
This is the main function in my shader:
void main(void)
{
    Ray ray;
    ray.origin = vec3(0.5, 0.5, .75);
    ray.direction = vec3(gl_FragCoord.x / width, gl_FragCoord.y / height, -gl_FragCoord.z) - ray.origin;
    ray.direction = normalize(ray.direction);
    gl_FragColor = trace(ray);
}
My first question is regarding the origin of the ray. How do I get its location? Right now, I just fiddle around until it looks right, but if I change the width or height of the screen I have to fiddle again.
My second question is about the intersection between a ray and a disk. I do this by first checking whether the ray intersects a plane and then whether the intersection point is within the radius of the disk.
My code looks like this:
float intersectPlane(Ray ray, vec3 point, vec3 normal)
{
    return dot(point - ray.origin, normal) / dot(ray.direction, normal);
}
...
det = intersectPlane(ray, bodies[count].position, vec3(0, 0, 1));
if (det > 0)
{
    if (distance(det * ray.direction, bodies[count].position) <= bodies[count].radius)
    {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
}
The problem is that if bodies[count].radius is less than or equal to the z position of the ray's origin, then nothing shows up. So
if (det > 0)
{
    if (distance(det * ray.direction, bodies[count].position) <= .76)
    {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
}
results in visible disks, while using the actual radius results in nothing.
As to your second question: don't use a distance, use a squared distance. It's faster to compute, and I suspect it may solve your problem.
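That is, compare squared quantities; a sketch of the same test as in the question, written as GLSL since it amends the shader code above:
// same comparison as in the question, just without the square root
vec3 diff = det * ray.direction - bodies[count].position;
if (dot(diff, diff) <= bodies[count].radius * bodies[count].radius)
{
    return vec4(1.0, 0.0, 0.0, 1.0);
}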
The origin of the ray is really up to you; however, I recommend choosing it so that the pixel positions are approximately equidistant from the origin and the objects.
Be careful about the direction of the ray: the objects you are trying to see must be in front of the camera (the rays you send must be able to hit the objects).
The intersection point of a ray and a plane is calculated as follows:
dist = dot( plane_origin - ray.origin, plane_NV ) / dot( ray.direction, plane_NV );
plane_isect = ray.origin + ray.direction * dist;
Your function intersectPlane correctly calculates the distance from the ray's origin to the intersection point with the plane, but you never compute the intersection point itself before comparing it to the center of the disk.
To test if the intersection point is within the radius you have to do the following:
vec3 plane_isect = ray.origin + det * ray.direction;
if ( distance( plane_isect, bodies[count].position ) <= bodies[count].radius )
Adapt your code like this:
det = intersectPlane( ray, bodies[count].position, vec3(0,0,1) );
if ( det > 0 )
{
    vec3 plane_isect = ray.origin + det * ray.direction;
    if ( distance( plane_isect, bodies[count].position ) <= bodies[count].radius )
    {
        return vec4(1.0, 0.0, 0.0, 1.0);
    }
}