I am currently working on a raytracer just for fun and I have trouble with the refraction handling.
The source code of the whole raytracer can be found on GitHub. EDIT: the code has migrated to GitLab.
Here is an image of the render:
The right sphere is set to have a refraction index of 1.5 (glass).
On top of the refraction, I want to handle a "transparency" coefficient which is defined as follows:
0 --> Object is 100% opaque
1 --> Object is 100% transparent (no trace of the original object's color)
This sphere has a transparency of 1.
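For illustration, the coefficient is meant to blend colors roughly like this (a sketch only, not the actual code from the repository; it assumes Color supports scaling and addition):

Color applyTransparency(const Color& localColor, const Color& refractedColor, Float_t transparency)
{
    // 0 -> keep the object's own color, 1 -> only the refracted color remains.
    return localColor * (Float_t(1.0) - transparency) + refractedColor * transparency;
}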
Here is the code handling the refraction part. It can be found on GitHub here.
Color handleTransparency(const Scene& scene,
                         const Ray& ray,
                         const IntersectionData& data,
                         uint8 depth)
{
    Ray refracted(RayType::Transparency, data.point, ray.getDirection());
    Float_t eta = data.material->getRefraction();
    if (eta != 1 && eta > Globals::Epsilon)
        refracted.setDirection(Tools::Refract(ray.getDirection(), data.normal, eta));
    // Offset the origin slightly along the new direction to avoid self-intersection.
    refracted.setOrigin(data.point + Globals::Epsilon * refracted.getDirection());
    return inter(scene, refracted, depth + 1);
}
// http://graphics.stanford.edu/courses/cs148-10-summer/docs/2006--degreve--reflection_refraction.pdf
Float_t getFresnelReflectance(const IntersectionData& data, const Ray& ray)
{
    Float_t n = data.material->getRefraction();
    Float_t cosI = -Tools::DotProduct(ray.getDirection(), data.normal);
    Float_t sin2T = n * n * (Float_t(1.0) - cosI * cosI);
    if (sin2T > 1.0)
        return 1.0; // total internal reflection: everything is reflected
    using std::sqrt;
    Float_t cosT = sqrt(1.0 - sin2T);
    Float_t rPer = (n * cosI - cosT) / (n * cosI + cosT);
    Float_t rPar = (cosI - n * cosT) / (cosI + n * cosT);
    return (rPer * rPer + rPar * rPar) / Float_t(2.0);
}
Color handleReflectionAndRefraction(const Scene& scene,
                                    const Ray& ray,
                                    const IntersectionData& data,
                                    uint8 depth)
{
    bool hasReflexion = data.material->getReflexion() > Globals::Epsilon;
    bool hasTransparency = data.material->getTransparency() > Globals::Epsilon;
    if (!(hasReflexion || hasTransparency) || depth >= MAX_DEPTH)
        return 0;
    Float_t reflectance = data.material->getReflexion();
    Float_t transmittance = data.material->getTransparency();
    Color reflexion;
    Color transparency;
    if (hasReflexion && hasTransparency)
    {
        // When both are present, split the energy according to Fresnel.
        reflectance = getFresnelReflectance(data, ray);
        transmittance = 1.0 - reflectance;
    }
    if (hasReflexion)
        reflexion = handleReflection(scene, ray, data, depth) * reflectance;
    if (hasTransparency)
        transparency = handleTransparency(scene, ray, data, depth) * transmittance;
    return reflexion + transparency;
}
Tools::Refract simply calls glm::refract internally (so that I can swap it out easily if I want).
I don't handle separate n1 and n2 values: the other medium is always assumed to be air with an index of 1.
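For reference, as far as I know glm::refract implements the GLSL refract formula, which is equivalent to the sketch below (written with the type names from my code; I and N must be normalized and eta is the ratio of indices):

Vec_t refractSketch(const Vec_t& I, const Vec_t& N, Float_t eta)
{
    Float_t cosI = Tools::DotProduct(N, I);
    Float_t k = Float_t(1.0) - eta * eta * (Float_t(1.0) - cosI * cosI);
    if (k < Float_t(0.0))
        return Vec_t(0.0); // total internal reflection: no refracted ray (adapt to however Vec_t is zero-constructed)
    return eta * I - (eta * cosI + std::sqrt(k)) * N;
}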
Am I missing something obvious?
EDIT
After adding a way to know whether a ray is inside an object (and negating the normal if so), I have this:
While looking around for help, I stumbled upon this post, but I don't think the answer actually answers anything. Reading it, I don't understand what I'm supposed to do at all.
EDIT 2
I've tried a lot of things and I am currently at this point:
It's better, but I'm still not sure it's right. I'm using this image as inspiration:
But that one uses two indices of refraction (to be closer to reality), while I want to simplify things and always consider air as the second (in or out) material.
What I essentially changed in my code is here:
inline Vec_t Refract(Vec_t v, const IntersectionData& data, Float_t eta)
{
    Float_t n = eta;
    if (data.isInside)
        n = 1.0 / n;
    double cosI = Tools::DotProduct(v, data.normal);
    return v * n - data.normal * (-cosI + n * cosI);
}
Here is another view of the same spheres:
EDIT: I've realised that the previous version of this answer was not entirely correct, so I've edited it.
After reading all the comments and the new versions of the question, and doing some experimentation myself, I produced the following version of the refract routine:
float3 refract(float3 i, float3 n, float eta)
{
    eta = 2.0f - eta;
    float cosi = dot(n, i);
    float3 o = (i * eta - n * (-cosi + eta * cosi));
    return o;
}
This time calling it does not require any additional operations:
float3 refr = refract(rayDirection, normal, refrIdx);
The only thing I am still not sure about is inverting the refractive index when doing the inside-ray intersection. In my tests the produced images didn't differ much whether I inverted the index or not.
Below some images with different indices:
For more images see the link, because the site does not allow me to put more of them here.
I am answering this as a physicist rather than a programmer; I haven't had time to read all the code, so I won't give the code for the fix, just the general idea.
From what you have said above, the black ring appears when n_object is less than n_air. This is usually only true if you are inside an object, say inside water or the like, but materials have been constructed with weird properties like that, so it should be supported.
In this type of situation there are rays of light that can't be refracted, because the refraction formula puts the refracted ray on the SAME side of the interface between the materials, which obviously doesn't make sense for refraction. In this situation the surface will instead act as if it were a reflective surface. This is the situation often referred to as total internal reflection.
To be fully exact, almost every refractive object is also partially reflective, and the fraction of light that is reflected or transmitted (and therefore refracted) is given by the Fresnel equations. For this case, though, it would still be a good approximation to just treat the surface as reflective when the angle is too steep, and as transmitting (and therefore refracting) otherwise.
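A minimal sketch of that reflect-or-refract decision, reusing the function names from the question (n1 and n2 are placeholders for the indices on either side of the interface):

Float_t eta = n1 / n2;                                     // ratio of refractive indices at the interface
Float_t cosI = -Tools::DotProduct(ray.getDirection(), data.normal);
Float_t sin2T = eta * eta * (Float_t(1.0) - cosI * cosI);  // Snell's law: squared sine of the transmitted angle
Color color;
if (sin2T > Float_t(1.0))
    color = handleReflection(scene, ray, data, depth);     // total internal reflection: reflect only
else
    color = handleTransparency(scene, ray, data, depth);   // otherwise transmit/refract as usual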
There are also situations where this black ring effect can be seen when reflection is not possible (because it is dark in those directions) but transmitted light is. You could produce this by, say, taking a tube of card that fits tightly around the edge of the object, pointing it directly away, and only shining light inside the tube, not outside.
Related
I have a problem when re-computing the surface normals of a mesh in Open3D. The problem is that the estimation is not good enough and I don't know how to make it better. From the image below it cannot be seen that the mesh has a hole in the middle of the belly.
However, if the same mesh is seen from the side it clearly has a hole.
If I then use MeshLab to do the normal estimation, the hole can suddenly be seen clearly. MeshLab supports 4 different functions for the normal estimation of a mesh, but the results are more or less the same no matter which function I use.
Here is the same mesh after estimating normals with MeshLab.
I find it very strange that Open3D does not even come close to the accuracy of MeshLab's normal estimation, and I believe it is most likely because I am missing some important calculation before using Open3D's normal estimation function.
Here is the code which is used for the normal estimation in Open3D:
void ReconstructionSystem::constructMeshDeformation(glm::vec3& intersectionPosition, glm::vec3& robotPosition) {
    double depthOfProbe = glm::distance(intersectionPosition.z, robotPosition.z);
    double affectedArea = 0.015 * (depthOfProbe * 100.0); // 0.015 is a random value; the function simply works well with it
    if (affectedArea > 0.08) {
        affectedArea = 0.08;
    }
    deformatedMesh = std::make_shared<open3d::geometry::TriangleMesh>(*final_mesh);
    int i = 0;
    for (const Eigen::Vector3d& vertex : deformatedMesh->vertices_) {
        glm::vec3 vex = glm::vec3(vertex.x(), vertex.y(), vertex.z());
        double dist = glm::distance(vex, intersectionPosition);
        if (dist < affectedArea) {
            double ratio = dist / affectedArea;
            double deformationAmount = glm::cos((2.0 * M_PI * ratio) / 4.0);
            deformatedMesh->vertices_.at(i).z() -= depthOfProbe * deformationAmount;
        }
        i++;
    }
    *deformatedMesh = deformatedMesh->ComputeVertexNormals();
}
I have recently been working with the SFML libraries and I am trying to build a space shooter game from scratch. After some time working on it I have something that works fine, but I am facing one issue and I do not know exactly how to proceed, so I hope your wisdom can lead me to a good solution. I will try to explain it as best I can:
Enemies following a path: currently in my game, I have enemies that can follow linear paths doing the following:
float vx = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x - (float)m_pos_v.x;
float vy = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y - (float)m_pos_v.y;
float len = sqrt(vx * vx + vy * vy);
//cout << len << endl;
if (len < 2.0f)
{
    // Close enough, entity has arrived
    //cout << "Has arrived" << endl;
    m_wayPointsIndex_ui8++;
    if (m_wayPointsIndex_ui8 >= m_wayPoints_v.size())
    {
        m_wayPointsIndex_ui8 = 0;
    }
}
else
{
    vx /= len;
    vy /= len;
    m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
    m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
}
m_wayPoints_v is a vector that basically holds the 2D points to be followed.
Related to this small piece of code, I have to say that it sometimes gives me problems, because getting close enough to the next point becomes harder the higher the enemies' speed is.
Is there any other way to follow the path more accurately, independently of the enemy speed? Also related to path following: what if I would like to give the enemies an introduction before each wave's movement pattern starts (doing circles, spirals, ellipses or whatever before reaching the final point)?
For example, in the picture below:
The black line is the path I want a spaceship to follow before starting the AI pattern (moving from left to right and from right to left), which is the red circle.
Is this done by hard-coding each and every movement, or is there a better solution?
I hope I made myself clear on this...in case I did not, please let me know and I will give more details. Thank you very much in advance!
Way points
You need to add some additional information to the way points, and track the NPC's position in relation to the way points.
The code snippet below shows how a set of way points can be created as a linked list. Each way point has a link and a distance to the next way point, plus the total distance up to this way point.
Then, each step, you just increase the NPC distance along the set of way points. If that distance is greater than the totalDistance at the next way point, follow the link to the next. You can use a while loop to search for the next way point, so you will always be at the correct position no matter what your speed.
Once you are at the correct way point, it's just a matter of calculating where the NPC is between the current and next way point.
Define a way point
class WayPoint {
public:
    WayPoint(float, float);
    float x, y, distanceToNext, totalDistance;
    WayPoint* next = nullptr;      // link to the following way point
    WayPoint* addNext(WayPoint* wp);
};

WayPoint::WayPoint(float px, float py) {
    x = px; y = py;
    distanceToNext = 0.0f;
    totalDistance = 0.0f;
}

WayPoint* WayPoint::addNext(WayPoint* wp) {
    next = wp;
    distanceToNext = sqrt((next->x - x) * (next->x - x) + (next->y - y) * (next->y - y));
    next->totalDistance = totalDistance + distanceToNext;
    return wp;
}
Declaring and linking waypoints
WayPoint a(10.0f, 10.0f);
WayPoint b(100.0f, 400.0f);
WayPoint c(200.0f, 100.0f);
a.addNext(&b);
b.addNext(&c);
NPC follows the way point path at any speed
WayPoint* currentWayPoint = &a;
NPC ship;
ship.distance += ship.speed * time;
while (ship.distance > currentWayPoint->next->totalDistance) {
    currentWayPoint = currentWayPoint->next;
}
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// NOTE: to smooth the line following, use the ease curve. See bottom of answer.
// float unitDist = sigBell((ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext);
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y;
Note: you can link back to the start, but be careful to check for the total distance going back to zero in the while loop, or you will end up in an infinite loop. When you pass zero, recalculate the NPC distance as the modulo of the last way point's totalDistance, so you never travel more than one loop of way points to find the next.
E.g. in the while loop, if passing the last way point:
if (currentWayPoint->next->totalDistance == 0.0f) {
    ship.distance = fmod(ship.distance, currentWayPoint->totalDistance);
}
Smooth paths
Using the above method you can add additional information to the way points.
For example, for each way point add a vector that is 90 degrees off the path to the next.
// 90 deg CW
offX = -(next->y - y) / distanceToNext; // yes, offX uses -y
offY =  (next->x - x) / distanceToNext;
offDist = ?; // how far from the line you want the path to go
Then, when you calculate the unitDist along the line between two way points, you can use that unit distance to smoothly interpolate the offset:
float unitDist = (ship.distance - currentWayPoint->totalDistance) / currentWayPoint->distanceToNext;
// very basic ease-in and ease-out, or use the sigBell curve
float unitOffset = unitDist < 0.5f ? (unitDist * 2.0f) * (unitDist * 2.0f) : sqrt((unitDist - 0.5f) * 2.0f);
float x = currentWayPoint->offX * currentWayPoint->offDist * unitOffset;
float y = currentWayPoint->offY * currentWayPoint->offDist * unitOffset;
ship.pos.x = (currentWayPoint->next->x - currentWayPoint->x) * unitDist + currentWayPoint->x + x;
ship.pos.y = (currentWayPoint->next->y - currentWayPoint->y) * unitDist + currentWayPoint->y + y;
Now if you add 3 way points, with the first offDist a positive distance and the second a negative offDist, you will get a path with smooth curves like the ones you show in the image.
Note that the actual speed of the NPC will change over each way point. The maths to get a constant speed with this method is too heavy to be worth the effort, as for small offsets no one will notice. If your offsets are too large, rethink your way point layout.
Note: the above method is a modification of a quadratic Bezier curve where the control point is defined as an offset from the centre between the end points.
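For reference, a quadratic Bezier evaluated per axis looks like this (a sketch; p0 and p1 are the two way point coordinates and c the control point built from the offset):

// B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1, evaluated separately for x and y.
float quadBezier(float p0, float c, float p1, float t) {
    float it = 1.0f - t;
    return it * it * p0 + 2.0f * it * t * c + t * t * p1;
}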
Sigmoid curve
You don't need to add the offsets, as you can get some (limited) smoothing along the path by manipulating the unitDist value (see the comment in the first snippet).
Use the following two functions to convert unit values into a bell-like curve (sigBell) and a standard ease-out/ease-in curve (sigmoid). Use the power argument to control the slopes of the curves.
float sigmoid(float unit, float power) { // power should be > 0. 1 is a straight line, 2 is ease-out/ease-in, 0.5 is ease-to-centre/ease-from-centre
    float u = unit <= 0.0f ? 0.0f : (unit >= 1.0f ? 1.0f : unit); // clamp, as float errors will show
    float p = pow(u, power);
    return p / (p + pow(1.0f - u, power));
}

float sigBell(float unit, float power) {
    float u = unit < 0.5f ? unit * 2.0f : 1.0f - (unit - 0.5f) * 2.0f;
    return sigmoid(u, power);
}
This doesn't answer your specific question. I'm just curious why you don't use the SFML type sf::Vector2 (or its typedefs Vector2i, Vector2u, Vector2f)? It seems like it would clean up some of your code.
As far as the animation is concerned, you could consider loading the positions for the flight pattern you want into a stack or similar container, then pop each position, move your ship to it, render, and repeat.
And if you want a sine-like flight path similar to your picture, you can find an equation that matches the flight path you like. Use Desmos or something similar to make a graph that fits your needs, then iterate at whatever interval you want, plugging each step into the equation; the results are your positions at each step.
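For instance, such a path could be pre-sampled into waypoints (a sketch only; makeSinePath and its parameters are made up for illustration):

#include <SFML/System/Vector2.hpp>
#include <cmath>
#include <vector>

// Hypothetical helper: samples a sine-shaped flight path into points that a
// waypoint follower (or a stack of positions) can consume.
std::vector<sf::Vector2f> makeSinePath(float startX, float endX, float baseY,
                                       float amplitude, float wavelength, int samples)
{
    std::vector<sf::Vector2f> points;
    points.reserve(samples);
    for (int i = 0; i < samples; ++i) {
        float t = static_cast<float>(i) / static_cast<float>(samples - 1);
        float x = startX + t * (endX - startX);
        float y = baseY + amplitude * std::sin(2.0f * 3.14159265f * (x - startX) / wavelength);
        points.emplace_back(x, y);
    }
    return points;
}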
Well, I think I have found one of the problems, but I am not sure what the solution could be.
When using the piece of code I posted before, I found that there is a problem when reaching the destination point, due to the speed value. Currently, to move a spaceship fluently, I need to set the speed to 200... which means that in these formulas:
m_pos_v.x += vx * float(m_moveSpeed_ui16) * time;
m_pos_v.y += vy * float(m_moveSpeed_ui16) * time;
The new position might overshoot the "2.0f" tolerance, so the spaceship can never satisfy the arrival check and gets stuck, because the minimum movement that can be done per frame (assuming 60 fps) is 200 * 1 / 60 = 3.33 px. Is there any way this behavior can be avoided?
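One idea I am considering (a sketch only, using the same variables as above): clamp the per-frame step so the ship can never overshoot the waypoint.

float step = float(m_moveSpeed_ui16) * time;   // e.g. 200 * (1 / 60) = 3.33 px per frame
if (step >= len)
{
    // Snap onto the waypoint instead of overshooting; the arrival check then
    // advances to the next waypoint on the following frame.
    m_pos_v.x = (float)m_wayPoints_v[m_wayPointsIndex_ui8].x;
    m_pos_v.y = (float)m_wayPoints_v[m_wayPointsIndex_ui8].y;
}
else
{
    vx /= len;
    vy /= len;
    m_pos_v.x += vx * step;
    m_pos_v.y += vy * step;
}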
I've implemented a simple ray tracer and now I'm trying to implement reflections, but objects are behaving as if they were transparent.
Here is my code for getting the reflected ray.
ray* reflected = new ray();
reflected->direction = rayIn.direction - (2 * glm::dot(rayIn.direction, normal)) * normal;
reflected->origin = int_point + epsilon * normal;
outColor += ((int_object->reflectivity)*intersectray(*reflected, depth - 1));
Here are the images. With the reflection code:
Without the reflection code:
I'll edit the post if more code is needed.
Edit: It seems the problem is in how I'm iterating through the objects in the scene. I insert the objects as:
scene->add(sphere1);
scene->add(sphere2);
But when I change this to:
scene->add(sphere2);
scene->add(sphere1);
the output is correct.
Sphere 1 is closer to the camera than sphere 2 and they are not overlapping.
The problem was this part of the code:
for (objects in scene) {
    double intersection = (*objIterator)->intersect(rayIn, normal);
    if (intersection < minDistance && intersection > epsilon)
    {
        minDistance = intersection;
        int_object = *objIterator;
        int_point = rayIn.origin + intersection * rayIn.direction + (epsilon * normal);
    }
}
Here normal is used later for other calculations, but the first line updates normal for the current object's intersection (even if it's not the closest one). So I added a vector to store the normal of the intersected object and used that later.
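Roughly, the fix looks like this (a sketch; tempNormal and closestNormal are illustrative names, and the container, iterator and vector types follow whatever the scene class actually uses):

// Keep the normal of the closest hit so later (farther) objects can't overwrite it.
glm::dvec3 tempNormal, closestNormal;
for (auto objIterator = objects.begin(); objIterator != objects.end(); ++objIterator)
{
    double intersection = (*objIterator)->intersect(rayIn, tempNormal);
    if (intersection > epsilon && intersection < minDistance)
    {
        minDistance = intersection;
        int_object = *objIterator;
        closestNormal = tempNormal;
        int_point = rayIn.origin + intersection * rayIn.direction + (epsilon * tempNormal);
    }
}
normal = closestNormal; // use the stored normal for the shading that follows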
I've started coding a raytracer, but today I encountered a problem when dealing with reflection.
First, here is an image of the problem:
I only computed the object's reflected color (so no light effect is applied on the reflected object)
The problem is that distortion that I really don't understand.
I looked at the angle between my rayVector and the normalVector and it looks OK; the reflected vector also looks fine.
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyNormal = normal;
    Vector copyView = ray;
    copyNormal.makeUnit();
    copyView.makeUnit();
    cosAngle = copyView.scale(copyNormal);
    return (-2.0 * cosAngle * normal + ray);
}
So for example when my ray is hitting the bottom of my sphere I have the following values:
cos: 1
ViewVector: [185.869,-2.44308,-26.3504]
NormalVector: [185.869,-2.44308,-26.3504]
ReflectedVector: [-185.869,2.44308,26.3504]
Below is the code that handles the reflection:
Color Rt::getReflectedColor(std::shared_ptr<SceneObj> obj, Camera camera,
                            Vector rayVec, double k, unsigned int pass) {
    if (pass > 10)
        return obj->getColor();
    if (obj->getReflectionIndex() == 0) {
        // apply effects
        return obj->getColor();
    }
    Color cuColor(obj->getColor());
    Color newColor(0);
    Math math;
    Vector view;
    Vector normal;
    Vector reflected;
    Position impact;
    std::pair<std::shared_ptr<SceneObj>, double> reflectedObj;

    normal = math.calcNormalVector(camera.pos, obj, rayVec, k, impact);
    view = Vector(impact.x, impact.y, impact.z) -
           Vector(camera.pos.x, camera.pos.y, camera.pos.z);
    reflected = math.calcReflectedVector(view, normal);
    reflectedObj = this->getClosestObj(reflected, Camera(impact));
    if (reflectedObj.second <= 0) {
        cuColor.mix(0x000000, obj->getReflectionIndex());
        return cuColor;
    }
    newColor = this->getReflectedColor(reflectedObj.first, Camera(impact),
                                       reflected, reflectedObj.second, pass + 1);
    // apply effects
    cuColor.mix(newColor, obj->getReflectionIndex());
    return newColor;
}
To calculate the normal and the reflected Vector:
Vector Math::calcReflectedVector(const Vector &ray,
                                 const Vector &normal) const {
    double cosAngle;
    Vector copyRay = ray;
    copyRay.makeUnit();
    cosAngle = copyRay.scale(normal);
    return (-2.0 * cosAngle * normal + copyRay);
}

Vector Math::calcNormalVector(Position pos, std::shared_ptr<SceneObj> obj,
                              Vector rayVec, double k, Position& impact) const {
    const Position &objPos = obj->getPosition();
    Vector normal;
    impact.x = pos.x + k * rayVec.x;
    impact.y = pos.y + k * rayVec.y;
    impact.z = pos.z + k * rayVec.z;
    obj->calcNormal(normal, impact);
    return normal;
}
[EDIT1]
I have a new image; I removed the plane to keep only the spheres:
As you can see there is blue and yellow on the border of the sphere.
Thanks to neam, I colored the sphere by applying the following formula:
newColor.r = reflected.x * 127.0 + 127.0;
newColor.g = reflected.y * 127.0 + 127.0;
newColor.b = reflected.z * 127.0 + 127.0;
Below is the visual result:
Ask me if you need any information.
Thanks in advance
There are many little things going on with the example you provided. This may -- or may not -- answer your question, but as I suppose you're writing a raytracer for learning purposes (either at school or in your free time), I'll give you some hints.
You have two classes, Vector and Position. It may well seem like a good idea, but why not see the position as the translation vector from the origin? This would avoid some code duplication, I think (unless you've done something like using Position = Vector;). You may also want to look at libraries that do all the mathematics for you (like glm does). (This way, you'll also avoid some oddities like naming your dot-product function scale().)
You create a camera from the impact position (which is a really strange thing). Reflections don't involve any camera. In a typical raytracer, you have one camera {position + direction + fov + ...}, and for each pixel of your image / reflection / refraction / ..., you cast rays {origin + direction} (thus the name raytracer, which isn't cameratracer). The Camera class is usually tied to the concept of a physical camera, with things like focal length, depth of field, aperture, chromatic aberration, ..., whereas a ray is simply... a ray (it could be a ray from the image plane to the first object, or a ray created by reflection, refraction, scattering, ...).
And for the final point: I think your error may come from the Math::calcNormalVector(...) function. For a sphere at position P and an intersection point I, the normal N is: N = normalize(I - P);
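In code (a sketch using the Vector / Position types from the question, where makeUnit() normalizes the vector), that could look like:

Vector sphereNormal(const Position& centre, const Position& impact)
{
    // N = normalize(I - P)
    Vector n(impact.x - centre.x, impact.y - centre.y, impact.z - centre.z);
    n.makeUnit();
    return n;
}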
EDIT: it seems your problem comes from Rt::getClosestObj. Everything else is looking fine.
There are tons of websites/blogs/educational resources online about creating a simple raytracer, so for the first two points I'll let them teach you. Take a look at glm.
If you don't figure out what is wrong with calcNormalVector(...), please post its code :)
Did that work?
I assume that your ray and normal vector are already normalized.
Vector Math::reflect(const Vector &ray, const Vector &normal) const
{
    return ray - 2.0 * Math::dot(normal, ray) * normal;
}
Moreover, with the code you provided, I can't understand this call:
this->getClosestObj(reflected, Camera(obj->getPosition()));
Shouldn't it be something like this instead?
this->getClosestObj(reflected, Camera(impact));
Hello, I'm studying a raytracing algorithm and I'm stuck on the Monte Carlo part. While rendering without an area light my output was correct, but when I added an area light implementation to the source code to generate soft shadows, I encountered a problem.
Here is the before-after output image.
When I move the blue sphere down the problem persists (notice that the artifact continues while the sphere is along the white dotted line).
Note that this sphere and the area light have the same z offset. When I bring the blue sphere to the front of the screen, the artifact is gone. I think the problem is caused by the uniform cone sampling or the sphere sampling function, but I'm not sure.
Here is the function:
template <typename T>
CVector3<T> UConeSample(T u1, T u2, T costhetamax,
                        const CVector3<T>& x, const CVector3<T>& y, const CVector3<T>& z) {
    T costheta = Math::Lerp(u1, costhetamax, T(1));
    T sintheta = sqrtf(T(1) - costheta * costheta);
    T phi = u2 * T(2) * T(M_PI);
    return cosf(phi) * sintheta * x +
           sinf(phi) * sintheta * y +
           costheta * z;
}
I'm generating the random float values u1 and u2 from a van der Corput sequence.
This is the sphere sampling method:
CPoint3<float> CSphere::Sample(const CLightSample& ls, const CPoint3<float>& p, CVector3<float> *n) const {
    // translate object to world space
    CPoint3<float> pCentre = o2w(CPoint3<float>(0.0f));
    CVector3<float> wc = Vector::Normalize(pCentre - p);
    CVector3<float> wcx, wcy;
    // create local coordinate system from wc for uniform cone sampling
    Vector::CoordinateSystem(wc, &wcx, &wcy);
    // check if inside (epsilon value; is this right?)
    if (Point::DistSquare(p, pCentre) - radius * radius < 1e-4f)
        return Sample(ls, n);
    // else outside: evaluate cosine theta value
    float sinthetamax2 = radius * radius / Point::DistSquare(p, pCentre);
    float costhetamax = sqrtf(Math::Max(0.0f, 1.0f - sinthetamax2));
    // surface properties
    CSurfaceProps dg_sphere;
    float thit, ray_epsilon;
    CPoint3<float> ps;
    // create ray direction from the sampled point, then send the ray to the sphere
    CRay ray(p, Vector::UConeSample(ls.u1, ls.u2, costhetamax, wcx, wcy, wc), 1e-3f);
    // check intersection against the sphere, fill surface properties and calculate the hit point
    if (!Intersect(ray, &thit, &ray_epsilon, &dg_sphere))
        thit = Vector::Dot(pCentre - p, Vector::Normalize(ray.d));
    // evaluate surface normal
    ps = ray(thit);
    *n = CVector3<float>(Vector::Normalize(ps - pCentre));
    // return the sample point
    return ps;
}
Does anyone have any suggestions? Thanks.
I solved the problem.
The problem was caused by the RNG (random number generator) algorithm in the light sample class (it requires well-distributed u1, u2 values: low-discrepancy sampling).
Ray tracing needs a more careful RNG (one option being the "Mersenne Twister" pseudorandom number generator) and a good shuffling algorithm.
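For example, with the standard library's Mersenne Twister engine, the two sample values can be drawn like this (a minimal sketch):

#include <random>

std::mt19937 rng(std::random_device{}());                    // Mersenne Twister engine
std::uniform_real_distribution<float> uniform01(0.0f, 1.0f); // well-distributed values in [0, 1)

float u1 = uniform01(rng);
float u2 = uniform01(rng);
// feed u1 and u2 into Vector::UConeSample(...) as before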
I hope it will help. Thanks to everyone who posted comments.