Ray tracing: incorrect soft shadow sampling - C++

Hello, I'm studying a ray tracing algorithm and I'm stuck on the Monte Carlo part. When rendering without an area light my output was correct, but when I added an area light implementation to generate soft shadows I ran into a problem.
Here is the before-and-after output image.
When I move the blue sphere down the problem persists (notice that the artifact follows the sphere along the white dotted line).
Note that the sphere and the area light have the same z offset. When I bring the blue sphere towards the front of the screen, the artifact is gone. I think the problem is caused by the uniform cone sampling or the sphere sampling function, but I'm not sure.
Here is the function:
template <typename T>
CVector3<T> UConeSample(T u1, T u2, T costhetamax,
                        const CVector3<T>& x, const CVector3<T>& y, const CVector3<T>& z) {
    // Uniformly sample a direction inside the cone around z with half-angle acos(costhetamax)
    T costheta = Math::Lerp(u1, costhetamax, T(1));
    T sintheta = sqrtf(T(1) - costheta*costheta);
    T phi = u2 * T(2) * T(M_PI);
    return cosf(phi) * sintheta * x +
           sinf(phi) * sintheta * y +
           costheta * z;
}
I'm generating the random u1, u2 values from a van der Corput sequence.
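For reference, a minimal sketch of a base-2 van der Corput radical inverse (this is only the general technique, not my exact generator; the scramble parameter is an optional extra):
#include <cstdint>

// Sketch only: reverse the bits of the sample index and map them to [0, 1).
inline float VanDerCorput(uint32_t n, uint32_t scramble = 0)
{
    n = (n << 16) | (n >> 16);
    n = ((n & 0x00ff00ffu) << 8) | ((n & 0xff00ff00u) >> 8);
    n = ((n & 0x0f0f0f0fu) << 4) | ((n & 0xf0f0f0f0u) >> 4);
    n = ((n & 0x33333333u) << 2) | ((n & 0xccccccccu) >> 2);
    n = ((n & 0x55555555u) << 1) | ((n & 0xaaaaaaaau) >> 1);
    n ^= scramble;                        // optional scrambling
    return (n >> 8) * (1.0f / (1 << 24)); // top 24 bits mapped to [0, 1)
}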
This is the sphere sampling method:
CPoint3<float> CSphere::Sample(const CLightSample& ls, const CPoint3<float>& p, CVector3<float> *n) const {
    // Transform the sphere centre from object space to world space
    CPoint3<float> pCentre = o2w(CPoint3<float>(0.0f));
    CVector3<float> wc = Vector::Normalize(pCentre - p);
    CVector3<float> wcx, wcy;
    // Build a local coordinate system around wc for the uniform cone sample
    Vector::CoordinateSystem(wc, &wcx, &wcy);
    // If p is inside (or very close to) the sphere, fall back to uniform surface sampling
    if (Point::DistSquare(p, pCentre) - radius*radius < 1e-4f)
        return Sample(ls, n);
    // Otherwise compute the cosine of the maximum half-angle of the cone subtended by the sphere
    float sinthetamax2 = radius * radius / Point::DistSquare(p, pCentre);
    float costhetamax = sqrtf(Math::Max(0.0f, 1.0f - sinthetamax2));
    // Surface properties
    CSurfaceProps dg_sphere;
    float thit, ray_epsilon;
    CPoint3<float> ps;
    // Build a ray in a uniformly sampled direction inside the cone and send it to the sphere
    CRay ray(p, Vector::UConeSample(ls.u1, ls.u2, costhetamax, wcx, wcy, wc), 1e-3f);
    // Check the intersection against the sphere; if the ray misses (grazing case),
    // fall back to the distance along the ray towards the centre
    if (!Intersect(ray, &thit, &ray_epsilon, &dg_sphere))
        thit = Vector::Dot(pCentre - p, Vector::Normalize(ray.d));
    // Evaluate the hit point and its surface normal
    ps = ray(thit);
    *n = CVector3<float>(Vector::Normalize(ps - pCentre));
    // Return the sampled point
    return ps;
}
Does anyone have any suggestions? Thanks.

I solved the problem.
The problem was caused by the RNG (random number generator) in the light sample class: it needs well distributed u1, u2 values, i.e. low-discrepancy sampling.
Ray tracing needs a more sophisticated RNG (one option is the Mersenne Twister pseudorandom number generator) together with a good shuffling algorithm.
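For reference, a minimal sketch of what I mean (CLightSample here is just a stand-in for the real light sample class, not my actual code):
#include <random>

// Sketch only: draw well-distributed u1/u2 values with the Mersenne Twister.
struct CLightSample { float u1, u2; };

CLightSample MakeLightSample(std::mt19937 &gen)
{
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    return { dist(gen), dist(gen) };
}
// For pre-generated stratified sample sets, std::shuffle(samples.begin(),
// samples.end(), gen) can be used to decorrelate the sample order.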
I hope this helps. Thanks to everyone who posted comments.

Related

Raytracing Reflection distortion

I've started coding a raytracer, but today I encountered a problem when dealing with reflection.
First, here is an image of the problem:
I only computed the object's reflected color (so no light effect is applied on the reflected object)
The problem is the distortion, which I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So, for example, when my ray hits the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;
    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}
Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things in the example you provided. This may -- or may not -- answer your question, but since I suppose you're writing a raytracer for learning purposes (either at school or in your free time) I'll give you some hints.
You have two classes, Vector and Position. It may well seem like a good idea, but why not see the position as the translation vector from the origin? That would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the mathematical work for you (like glm could do). (And this way you'll avoid errors like naming your dot function scale().)
You create a camera from the position (which is a really strange thing). Reflections don't involve any camera. In a typical raytracer you have one camera {position + direction + fov + ...}, and for each pixel of your image/reflections/refractions/... you cast rays {origin + direction} (hence the name raytracer, which isn't cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ..., whereas a ray is simply... a ray. (It could be the ray from the plane where the output image is mapped to the first object, or a ray created by reflection, diffraction, scattering, ...)
And for the final point, I think your error may come from the Math::calcNormalVector(...) function. For a sphere at position P and an intersection point I, the normal N is: N = normalize(I - P);
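As a sketch only (reusing the Vector(x, y, z) constructor and makeUnit() from your posted code):
// Sphere normal at intersection point I for a sphere centred at P.
Vector sphereNormal(const Position &P, const Position &I)
{
    Vector n = Vector(I.x - P.x, I.y - P.y, I.z - P.z); // N = I - P
    n.makeUnit();                                       // normalize
    return n;
}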
EDIT: it seems like your problem comes from Rt::getClosestObj. Everything else looks fine.
There are tons of websites/blogs/educational resources online about writing a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you can't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, in the code you provided I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't that be something like this?
this->getClosestObj(reflected, Camera(impact));

Implementing soft shadows in a ray tracer

What I am trying to do is implement soft shadows in my simple ray tracer, developed in C++. The idea behind this, if I understood correctly, is to shoot multiple rays towards the light instead of a single ray towards its centre, and average the results. The rays are therefore shot at different positions on the light. So far I am using random points, and I don't know whether that is correct or whether I should use points regularly distributed on the light surface. Assuming I am doing it right, I choose a random point on the light, which in my framework is implemented as a sphere. This is given by:
Vec3<T> randomPoint() const
{
    T x;
    T y;
    T z;
    // random vector in unit sphere
    std::random_device rd; // used for the new <random> library
    std::mt19937 gen(rd());
    std::uniform_real_distribution<> dis(-1, 1);
    do
    {
        x = dis(gen);
        y = dis(gen);
        z = dis(gen);
    } while (pow(x, 2) + pow(y, 2) + pow(z, 2) > 1); // simple rejection sampling
    return center + Vec3<T>(x, y, z) * radius;
}
After this, I don't know exactly how I should proceed, since the rendering equation (in my simple ray tracer) is defined as follows:
Vec3<float> surfaceColor = 0;
for (int i = 0; i < lightsInTheScene.size(); i++) {
    surfaceColor += obj->surfaceColor * transmission *
        std::max(float(0), nHit.dot(lightDirection)) * g_lights[i]->emissionColor;
}
return surfaceColor + obj->emissionColor;
where transmission is a simple float which is set to 0 when the ray going from my hitPoint to the lightCenter hits an object along the way.
So, what I tried to do was:
creating multiple rays towards random points on the light
counting how many of them hit an object on their path and memorize this number
for simplicity: let's imagine that I shoot 3 shadow rays from my point towards random points on the light. Only 2 of the 3 rays reach the light. Therefore the final color of my pixel will be color * shadowFactor, where shadowFactor = 2/3. In my equation I then drop the transmission factor (which is now wrong) and use the shadowFactor instead. The problem is that in my equation I have:
std::max(float(0), nHit.dot(lightDirection))
which I don't know how to change, since I no longer have a single lightDirection pointing towards the center of the light. Can you please help me understand what I should do and what's wrong so far? Thanks in advance!
You should evaluate the entire BRDF for the picked light samples. Then you will also have the light direction (the vector from the object position to the picked light sample), and you can average these results. Note that most area lights have a non-isotropic emission characteristic (i.e. the amount of light emitted from a point varies with the outgoing direction).
Averaging the visibility does not produce correct results (although they are usually visually plausible).
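As a rough sketch of what that could look like inside the per-light loop from the question (nSamples, hitPoint, isOccluded and normalize are placeholder names, not from the original code):
// Sketch only: evaluate the diffuse term per light sample and average it,
// instead of averaging only the visibility.
Vec3<float> lightContribution(0);
const int nSamples = 16;
for (int s = 0; s < nSamples; ++s) {
    Vec3<float> lightPoint = g_lights[i]->randomPoint();           // point on the area light
    Vec3<float> lightDirection = normalize(lightPoint - hitPoint); // per-sample direction
    float visibility = isOccluded(hitPoint, lightPoint) ? 0.0f : 1.0f;
    lightContribution += obj->surfaceColor * visibility *
        std::max(float(0), nHit.dot(lightDirection)) * g_lights[i]->emissionColor;
}
surfaceColor += lightContribution * (1.0f / nSamples);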

Ray Tracing - Reflection

I'm now working on the reflection part of my ray tracer. I have everything working correctly, including creating a sphere with shadows. Now I'm implementing reflection, but I couldn't get it to work. My algorithm is below:
traceRay(Ray ray, int counter){
    // look through the intersections between the ray and the list of objects
    // find the final index aka the winning index (if final index == -1, return background color)
    // then calculate the intersection point
    // perform the reflection calculation here
    if (counter > 1 && winning object's reflectivity > 1) {
        // get the intersection normal, vector N
        // calculate the reflection ray, R
        // let I be the inverse of the direction of the incoming ray
        // calculate R = 2aN - I (a = N dotProduct I)
        // the reflection ray originates at the intersection point between the incoming ray and the sphere, with direction R
        Ray reflecRay(intersection_position, R);
        Color reflection = traceRay(reflecRay, counter + 1);
        // multiply by fraction ks
        reflection = reflection * ks;
    }
    // the color of the sphere calculated using the Phong formula in shadeRay()
    Color prefinal = shadeRay();
    // return the total color of prefinal + reflection
}
I'm trying to get the reflection working but can't. Can anyone please tell me whether my algorithm for the traceRay function is correct?
When reflecting a ray, you need to move its origin along the reflector's normal to avoid an intersection with the reflector itself. For example:
const double ERR = 1e-12;
Ray reflecRay(intersection_position + normal*ERR, R);
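Putting that together with the pseudocode's R = 2aN - I, the reflected ray might be built like this (a sketch only; the Vector type, dot() and the incomingDirection name are assumptions, not the poster's exact code):
const double ERR = 1e-12;
Vector I = -incomingDirection;  // inverse of the (normalized) incoming ray direction
double a = N.dot(I);            // a = N dot I
Vector R = 2.0 * a * N - I;     // reflection direction, R = 2aN - I
Ray reflecRay(intersection_position + N * ERR, R);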

Precision issue - viewpoint far from origin - OpenGL C++

I have a camera class for controlling the camera, with the main function:
void PNDCAMERA::renderMatrix()
{
float dttime=getElapsedSeconds();
GetCursorPos(&cmc.p_cursorPos);
ScreenToClient(hWnd, &cmc.p_cursorPos);
double d_horangle=((double)cmc.p_cursorPos.x-(double)cmc.p_origin.x)/(double)screenWidth*PI;
double d_verangle=((double)cmc.p_cursorPos.y-(double)cmc.p_origin.y)/(double)screenHeight*PI;
cmc.horizontalAngle=d_horangle+cmc.d_horangle_prev;
cmc.verticalAngle=d_verangle+cmc.d_verangle_prev;
if(cmc.verticalAngle>PI/2) cmc.verticalAngle=PI/2;
if(cmc.verticalAngle<-PI/2) cmc.verticalAngle=-PI/2;
changevAngle(cmc.verticalAngle);
changehAngle(cmc.horizontalAngle);
rightVector=glm::vec3(sin(horizontalAngle - PI/2.0f),0,cos(horizontalAngle - PI/2.0f));
directionVector=glm::vec3(cos(verticalAngle) * sin(horizontalAngle), sin(verticalAngle), cos(verticalAngle) * cos(horizontalAngle));
upVector=glm::vec3(glm::cross(rightVector,directionVector));
// glm::normalize returns a new vector, so the result must be assigned back
upVector=glm::normalize(upVector);
directionVector=glm::normalize(directionVector);
rightVector=glm::normalize(rightVector);
if(moveForw==true)
{
cameraPosition=cameraPosition+directionVector*(float)C_SPEED*dttime;
}
if(moveBack==true)
{
cameraPosition=cameraPosition-directionVector*(float)C_SPEED*dttime;
}
if(moveRight==true)
{
cameraPosition=cameraPosition+rightVector*(float)C_SPEED*dttime;
}
if(moveLeft==true)
{
cameraPosition=cameraPosition-rightVector*(float)C_SPEED*dttime;
}
glViewport(0,0,screenWidth,screenHeight);
glScissor(0,0,screenWidth,screenHeight);
projection_matrix=glm::perspective(60.0f, float(screenWidth) / float(screenHeight), 1.0f, 40000.0f);
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
gShader->bindShader();
gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
gShader->sendUniform("camera_position",cameraPosition.x,cameraPosition.y,cameraPosition.z);
gShader->sendUniform("screen_size",(GLfloat)screenWidth,(GLfloat)screenHeight);
};
It runs smoothly; I can control the angle with my mouse in the X and Y directions, but not around the Z axis (Y is the "up" axis in world space).
In my rendering method I render the terrain grid with one VAO call. The grid itself is a quad at the center (highest LOD), and the others are L-shaped grids scaled by powers of 2. It is always repositioned in front of the camera, scaled into world space, and displaced by a heightmap.
rcampos.x = round((camera_position.x)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
rcampos.y = 0;
rcampos.z = round((camera_position.z)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
vPos = vec3(uv.x,0,uv.y)*pow(2,LOD)*gridscale + rcampos;
vPos.y = texture(hmap,vPos.xz/horizontal_scale).r*vertical_scale;
The problem:
The camera starts at the origin, at (0,0,0). When I move it far away from that point, the rotation around the X axis becomes discontinuous. It feels as if the mouse cursor were snapped to a grid in screen space, and only positions at the grid points were registered as cursor movement.
I've also recorded the camera position where the effect gets quite noticeable; it's at about 1,000,000 from the origin in the X or Z direction. I've noticed that this 'lag' increases linearly with the distance from the origin.
There is also a little Z-fighting at that distance (or a similar effect), even if I use a single plane with no displacement, so no planes can overlap. (I use tessellation shaders and render patches.) Black spots appear on the patches. It may be caused by the fog:
float fc = (view_matrix*vec4(Pos,1)).z/(view_matrix*vec4(Pos,1)).w;
float fResult = exp(-pow(0.00005f*fc, 2.0));
fResult = clamp(fResult, 0.0, 1.0);
gl_FragColor = vec4(mix(vec4(0.0,0.0,0.0,0),vec4(n,1),fResult));
Another strange behavior is a slight rotation around the Z axis; this also increases with distance, even though I don't use that kind of rotation.
Variable formats:
The vertices are in unsigned short format, the indices are in unsigned int format.
The cmc struct is the camera/cursor struct with double variables.
PI and C_SPEED are #define constants.
Additional information:
The grid is created from the above-mentioned ushort array, with a spacing of 1. In the shader I scale it with a constant, then use tessellation to achieve the best performance and the largest view distance.
The final position of a vertex is calculated in the tessellation evaluation shader.
mat4 MVP = projection_matrix*view_matrix*model_matrix;
As you can see, I send my matrices to the shader with the glm library.
+Q:
How could the precision of a float (or any other format) cause this kind of 'precision loss', or whatever causes the problem? The view_matrix could be a cause of this, but I still cannot output it on the screen at runtime.
PS: I don't know if this helps, but the view matrix at about the 'lag start location' is
-0.49662 -0.49662 0.863129 0
0.00514956 0.994097 0.108373 0
-0.867953 0.0582648 -0.493217 0
1.62681e+006 16383.3 -290126 1
EDIT
Comparing the camera position and view matrix:
view matrix = 0.967928 0.967928 0.248814 0
-0.00387854 0.988207 0.153079 0
-0.251198 -0.149134 0.956378 0
-2.88212e+006 89517.1 -694945 1
position = 2.9657e+006, 6741.52, -46002
It's a long post, so I might not answer everything.
I think it is most likely a precision issue. Let's start with the camera rotation problem. I think the main problem is here:
view_matrix = glm::lookAt(
cameraPosition,
cameraPosition+directionVector,
upVector);
As you said, the position is quite a big number, like 2.9657e+006 - and look at what glm does in glm::lookAt:
GLM_FUNC_QUALIFIER detail::tmat4x4<T> lookAt
(
detail::tvec3<T> const & eye,
detail::tvec3<T> const & center,
detail::tvec3<T> const & up
)
{
detail::tvec3<T> f = normalize(center - eye);
detail::tvec3<T> u = normalize(up);
detail::tvec3<T> s = normalize(cross(f, u));
u = cross(s, f);
In your case, eye and center are these big (very similar) numbers, and glm then subtracts them to compute f. This is bad, because when you subtract two almost equal floats, the most significant digits cancel out, which leaves you with only the insignificant (most erroneous) digits. And you use this result for further computations, which only amplifies the error. Check this link for some details.
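A tiny standalone illustration of that precision loss (a toy example, not taken from the posted code):
#include <cstdio>

// Near 3e6 the spacing between adjacent float values is 0.25, so a small offset
// added to the camera position survives the subtraction only in 0.25 steps.
int main()
{
    float eye    = 2965700.0f;           // roughly the posted camera position
    float center = eye + 0.2f;           // eye plus a small view-direction offset
    std::printf("%.6f\n", center - eye); // prints 0.250000, not 0.200000
    return 0;
}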
The z-fighting is a similar issue. The z-buffer is not linear; it has the best resolution near the camera because of the perspective divide. The z-buffer range is set according to your near and far clipping plane values, and you always want the smallest possible ratio between the far and near values (generally far/near should not be greater than 30000). There is a very good explanation of this on the OpenGL wiki, I suggest you read it :)
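Applied to the posted projection call, that could mean, for example, simply pulling the near plane out a bit:
// The posted call uses near = 1 and far = 40000 (ratio 40000); a larger near
// value shrinks the ratio and leaves much more depth precision for the rest of the range.
projection_matrix = glm::perspective(60.0f, float(screenWidth) / float(screenHeight),
                                     5.0f, 40000.0f); // far/near = 8000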
Back to the camera issue - first, I would consider whether you really need such a huge scene. I don't think you do, but if so, you could try computing your view matrix differently: compute the rotation and translation separately, which could help in your case. The way I usually handle the camera:
glm::vec3 cameraPos;
glm::vec3 cameraRot;
glm::vec3 cameraPosLag;
glm::vec3 cameraRotLag;
int ox, oy;
const float inertia = 0.08f; //mouse inertia
const float rotateSpeed = 0.2f; //mouse rotate speed (sensitivity)
const float walkSpeed = 0.25f; //walking speed (wasd)
void updateCameraViewMatrix() {
    // camera inertia
    cameraPosLag += (cameraPos - cameraPosLag) * inertia;
    cameraRotLag += (cameraRot - cameraRotLag) * inertia;
    // view transform
    g_CameraViewMatrix = glm::rotate(glm::mat4(1.0f), cameraRotLag[0], glm::vec3(1.0, 0.0, 0.0));
    g_CameraViewMatrix = glm::rotate(g_CameraViewMatrix, cameraRotLag[1], glm::vec3(0.0, 1.0, 0.0));
    g_CameraViewMatrix = glm::translate(g_CameraViewMatrix, cameraPosLag);
}
void mousePositionChanged(int x, int y) {
    float dx, dy;
    dx = (float) (x - ox);
    dy = (float) (y - oy);
    ox = x;
    oy = y;
    if (mouseRotationEnabled) {
        cameraRot[0] += dy * rotateSpeed;
        cameraRot[1] += dx * rotateSpeed;
    }
}
void keyboardAction(int key, int action) {
    switch (key) {
        case 'S': // backwards
            cameraPos[0] -= g_CameraViewMatrix[0][2] * walkSpeed;
            cameraPos[1] -= g_CameraViewMatrix[1][2] * walkSpeed;
            cameraPos[2] -= g_CameraViewMatrix[2][2] * walkSpeed;
            break;
        ...
    }
}
This way, the position will not affect your rotation. I should add that I adapted this code from the NVIDIA CUDA samples v5.0 (Smoke Particles); I really like it :)
Hope at least some of this helps.

Ray tracing vectors

So I decided to write a ray tracer the other day, but I got stuck because I forgot all my vector math.
I've got a point behind the screen (the eye/camera, 400,300,-1000) and then a point on the screen (a plane from 0,0,0 to 800,600,0), which I'm getting just by using the x and y values of the current pixel I'm looking at (using SFML for rendering, so it's something like 267,409,0).
The problem is, I have no idea how to cast the ray correctly. I'm using this for testing sphere intersection (C++):
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{   // operator * between 2 vec3s is a dot product
    Vec3 dist = ray.start - sphere.pos; // both vec3s
    float B = -1 * (ray.dir * dist);
    float D = B*B - dist * dist + sphere.radius * sphere.radius; // radius is float
    if (D < 0.0f)
        return false;
    float t0 = B - sqrtf(D);
    float t1 = B + sqrtf(D);
    bool ret = false;
    if ((t0 > 0.1f) && (t0 < t))
    {
        t = t0;
        ret = true;
    }
    if ((t1 > 0.1f) && (t1 < t))
    {
        t = t1;
        ret = true;
    }
    return ret;
}
So I get that the start of the ray would be the eye position, but what is the direction?
Or, failing that, is there a better way of doing this? I've heard of some people using the ray start as (x, y, -1000) and the direction as (0,0,1), but I don't know how that would work.
On a side note, how would you do transformations? I'm assuming that to change the camera angle you just adjust the x and y of the camera (or of the screen if you need a drastic change).
The parameter "ray" in the function,
bool SphereCheck(Ray& ray, Sphere& sphere, float& t)
{
...
}
should already contain the direction information, and with this direction you need to check whether the ray intersects the sphere or not. (The incoming "ray" parameter is the vector between the camera point and the pixel the ray is sent through.)
Therefore the local "dist" variable seems obsolete.
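As a rough sketch with the numbers from the question (the normalize() helper and the Vec3 constructor are assumptions, not the poster's exact API):
// Build the primary ray from the eye point through the pixel on the z = 0 screen
// plane; the direction is normalized because SphereCheck's simplified quadratic
// assumes a unit-length direction.
Vec3 eye(400.0f, 300.0f, -1000.0f);  // the camera point from the question
Vec3 pixel(267.0f, 409.0f, 0.0f);    // the current pixel on the screen plane

Ray ray;
ray.start = eye;
ray.dir   = normalize(pixel - eye);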
One thing I can see is that when you create your rays you are not using the center of each pixel in the screen as the point for building the direction vector. You do not want to use just the (x, y) coordinates on the grid for building those vectors.
I've taken a look at your sample code and the calculation is indeed incorrect. This is what you want.
http://www.csee.umbc.edu/~olano/435f02/ray-sphere.html (I took this course in college, this guy knows his stuff)
Essentially it means you have this ray, which has an origin and a direction, and a sphere with a center point and a radius. You plug the ray equation into the sphere equation and solve for t. That t is the distance between the ray origin and the intersection point on the sphere's surface. I do not think your code does this.
So I get that the start of the ray would be the eye position, but what is the direction?
You have a camera defined by the vectors front, up, and right (perpendicular to each other and normalized) and a "position" (the eye position).
You also have the width and height of the viewport (in pixels), and the vertical field of view (vfov) and horizontal field of view (hfov) in degrees or radians.
There are also the 2D x and y coordinates of the pixel. The X axis (2D) points to the right, the Y axis (2D) points down.
For a flat screen the ray can be calculated like this:
startVector = eyePos;
endVector = startVector
+ front
+ right * tan(hfov/2) * (((x + 0.5)/width)*2.0 - 1.0)
+ up * tan(vfov/2) * (1.0 - ((y + 0.5f)/height)*2.0);
rayStart = startVector;
rayDir = normalize(endVector - startVector);
That assumes the screen plane is flat. For extreme field-of-view angles (fov >= 180 degrees) you might want to make the screen plane spherical and use different formulas.
how would you do transformations
Matrices.
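For illustration, a minimal sketch of what that looks like with glm (glm is only an example library here, not something the question uses):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch only: apply a camera-to-world transform to a ray. Points are transformed
// with w = 1 (they pick up the translation); directions with w = 0 (rotation/scale only).
glm::mat4 camToWorld = glm::translate(glm::mat4(1.0f),
                                      glm::vec3(400.0f, 300.0f, -1000.0f)); // eye position

glm::vec3 rayStart = glm::vec3(camToWorld * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f));
glm::vec3 rayDir   = glm::normalize(glm::vec3(camToWorld * glm::vec4(0.0f, 0.0f, 1.0f, 0.0f)));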