3D Collision resolution, moving AABB + polyhedron - c++

I have been writing a game engine in my free time, but I've been stuck for a couple weeks trying to get collisions working.
Currently I am representing entities' colliders with AABBs, and the collider for a level is represented by a fairly simple (but not necessarily convex) polyhedron. All the drawing is sprite-based but the underlying collision code is 3D.
I've got AABB/triangle collision detection working using this algorithm (which I can naively apply to every face in the level mesh), but I am stuck trying to resolve the collision after I have detected its existence.
The algorithm I came up with works pretty well, but there are some edge cases where it breaks. For example, walking straight into a sharp corner will always push the player to one side or the other. Or, if a small colliding face happens to have a normal that is closer to the player's direction of movement than all other faces, it will "pop" the player in that direction first, even though using the offset from a different face would have had a better result.
For reference, my current algorithm looks like:
Create list of all colliding faces
Sort list in increasing order of the angle between face normal and negative direction of entity movement (i.e. process faces with the most "stopping power" first)
For each colliding face in collision list:
    scale = distance of collision along face normal
    Entity position += face normal * scale
    If no more collision:
        break
And here's the implementation:
void Mesh::handleCollisions(Player& player) const
{
    using Face = Face<int32_t>;
    BoundingBox<float> playerBounds = player.getGlobalBounds();
    Vector3f negPlayerDelta = -player.getDeltaPos(); // Negative because face normal should oppose the player's direction
    auto comparator = [&negPlayerDelta](const Face& face1, const Face& face2) {
        const Vector3f norm1 = face1.normal();
        const Vector3f norm2 = face2.normal();
        float closeness1 = negPlayerDelta.dot(norm1) / (negPlayerDelta.magnitude() * norm1.magnitude());
        float closeness2 = negPlayerDelta.dot(norm2) / (negPlayerDelta.magnitude() * norm2.magnitude());
        return closeness1 > closeness2;
    };
    std::vector<Face> collidingFaces;
    for (const Face& face : _faces)
    {
        ::Face<float> floatFace(face);
        if (CollisionHelper::collisionBetween(playerBounds, floatFace))
        {
            collidingFaces.push_back(face);
        }
    }
    if (collidingFaces.empty())
    {
        return;
    }
    // Process in order of "closeness" between player delta and face normal
    std::sort(collidingFaces.begin(), collidingFaces.end(), comparator);
    Vector3f totalOffset;
    for (const Face& face : collidingFaces)
    {
        const Vector3f norm = face.normal().normalized();
        // Point on AABB that is most negative in direction of norm
        Point3<float> closestVert(playerBounds.xMin, playerBounds.yMin, playerBounds.zMin);
        if (norm.x < 0)
        {
            closestVert.x = playerBounds.xMax;
        }
        if (norm.y < 0)
        {
            closestVert.y = playerBounds.yMax;
        }
        if (norm.z < 0)
        {
            closestVert.z = playerBounds.zMax;
        }
        float collisionDist = closestVert.vectorTo(face[0]).dot(norm); // Distance from closest vert to face
        Vector3f offset = norm * collisionDist;
        BoundingBox<float> newBounds(playerBounds + offset);
        totalOffset += offset;
        if (std::none_of(collidingFaces.begin(), collidingFaces.end(),
            [&newBounds](const Face& face) {
                ::Face<float> floatFace(face);
                return CollisionHelper::collisionBetween(newBounds, floatFace);
            }))
        {
            // No more collision; we are done
            break;
        }
    }
    player.move(totalOffset);
    player.setVelocity(player.getDeltaPos());
}
I have been messing with sorting the colliding faces by "collision distance in the direction of player movement", but I haven't yet figured out an efficient way to find that distance value for all faces.
Does anybody know of an algorithm that would work better for what I am trying to accomplish?

I'm quite suspicious of the first part of the code. You modify the entity's position in each iteration, am I right? That could explain the weird edge cases.
In a 2D example, if a square walks towards a sharp corner and collides with both walls, its position is corrected by one wall first, which pushes it further into the second wall. The second wall then corrects its position using a larger scale value, so it looks as if the square is pushed by only one wall.
If a collision happens where surface S's normal is close to the player's movement, it will be handled after all other collisions. While those other collisions are being handled, the player's position is modified, and is likely to penetrate further into surface S. So the program deals with surface S last, and pops the player a long way.
I think there's a simple fix: compute all the penetrations at once against the original position, sum up the displacements in a temporary variable, and then change the position by the total displacement in one step.
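As a rough sketch of that fix, reusing the question's (assumed) Vector3f/Face/Point3 types: every penetration is measured against the original, unmoved bounds, and the position changes only once at the end. mostNegativeVertex() is a hypothetical helper wrapping the three norm.x/y/z < 0 checks from the question's loop:
    // Sum all penetration corrections against the ORIGINAL bounds, then move once
    Vector3f totalOffset;
    for (const Face& face : collidingFaces)
    {
        const Vector3f norm = face.normal().normalized();
        Point3<float> closestVert = mostNegativeVertex(playerBounds, norm); // hypothetical helper
        float depth = closestVert.vectorTo(face[0]).dot(norm);
        if (depth > 0) // only count faces we actually penetrate
        {
            totalOffset += norm * depth;
        }
    }
    player.move(totalOffset); // one displacement, applied once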


Frustum Culling Bug

So I've implemented Frustum Culling in my game engine and I'm experiencing a strange bug. I am rendering a building that is segmented into chunks and I'm only rendering the chunks which are in the frustum. My camera starts at around (-.033, 11.65, 2.2) and everything looks fine. I start moving around and there is no flickering. When I set a breakpoint in the frustum culling code I can see that it is indeed culling some of the meshes. Everything seems great. Then when I reach the center of the building, around (3.9, 4.17, 2.23) meshes start to disappear that are in view. The same is true on the other side as well. I can't figure out why this bug could exist.
I implement frustum culling using the extraction method listed here: Extracting View Frustum Planes (Gribb & Hartmann method). I had to use glm::inverse() rather than transpose as it suggested, and I think the matrix math was given for row-major matrices, so I flipped that. All in all, my frustum plane calculation looks like:
std::vector<Mesh*> render_meshes;
auto comboMatrix = proj * glm::inverse(view * model);
glm::vec4 p_planes[6];
p_planes[0] = comboMatrix[3] + comboMatrix[0]; //left
p_planes[1] = comboMatrix[3] - comboMatrix[0]; //right
p_planes[2] = comboMatrix[3] + comboMatrix[1]; //bottom
p_planes[3] = comboMatrix[3] - comboMatrix[1]; //top
p_planes[4] = comboMatrix[3] + comboMatrix[2]; //near
p_planes[5] = comboMatrix[3] - comboMatrix[2]; //far
for (int i = 0; i < 6; i++) {
    p_planes[i] = glm::normalize(p_planes[i]);
}
for (auto mesh : meshes) {
    if (!frustum_cull(mesh, p_planes)) {
        render_meshes.emplace_back(mesh);
    }
}
I then decide to cull each mesh based on its bounding box (as calculated by ASSIMP with the aiProcess_GenBoundingBoxes flag) as follows (returning true means culled)
glm::vec3 vmin, vmax;
for (int i = 0; i < 6; i++) {
    // X axis
    if (p_planes[i].x > 0) {
        vmin.x = m->getBBoxMin().x;
        vmax.x = m->getBBoxMax().x;
    }
    else {
        vmin.x = m->getBBoxMax().x;
        vmax.x = m->getBBoxMin().x;
    }
    // Y axis
    if (p_planes[i].y > 0) {
        vmin.y = m->getBBoxMin().y;
        vmax.y = m->getBBoxMax().y;
    }
    else {
        vmin.y = m->getBBoxMax().y;
        vmax.y = m->getBBoxMin().y;
    }
    // Z axis
    if (p_planes[i].z > 0) {
        vmin.z = m->getBBoxMin().z;
        vmax.z = m->getBBoxMax().z;
    }
    else {
        vmin.z = m->getBBoxMax().z;
        vmax.z = m->getBBoxMin().z;
    }
    if (glm::dot(glm::vec3(p_planes[i]), vmin) + p_planes[i][3] > 0)
        return true;
}
return false;
Any guidance?
Update 1: Normalizing the full vec4 representing the plane is incorrect as only the vec3 represents the normal of the plane. Further, normalization is not necessary for this instance as we only care about the sign of the distance (not the magnitude).
It is also important to note that I should be using the rows of the matrix not the columns. I am achieving this by replacing
p_planes[0] = comboMatrix[3] + comboMatrix[0];
with
p_planes[0] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 0);
in all instances.
You are using GLM incorrectly. As per the paper of Gribb and Hartmann, you can extract the plane equations as a sum or difference of different rows of the matrix, but in GLM, mat4 foo; foo[n] will yield the n-th column (similar to how GLSL is designed).
This here
for (int i = 0; i < 6; i++) {
    p_planes[i] = glm::normalize(p_planes[i]);
}
also doesn't make sense, since glm::normalize(vec4) will simply normalize a 4D vector. This will shift the plane along its normal direction. Only the xyz components must be brought to unit length, and w must be scaled accordingly. It is even explained in detail in the paper itself. However, since you only need to know which half-space a point lies in, normalizing the plane equation is a waste of cycles; you only care about the sign, not the magnitude, of the value anyway.
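For reference, a minimal sketch of the row-based extraction together with xyz-only normalization, assuming glm::row from <glm/gtc/matrix_access.hpp> (the normalization is shown only for completeness, since the sign test alone does not need it):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_access.hpp> // glm::row

// comboMatrix is whatever clip-space matrix you are testing in
glm::vec4 p_planes[6];
p_planes[0] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 0); // left
p_planes[1] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 0); // right
p_planes[2] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 1); // bottom
p_planes[3] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 1); // top
p_planes[4] = glm::row(comboMatrix, 3) + glm::row(comboMatrix, 2); // near
p_planes[5] = glm::row(comboMatrix, 3) - glm::row(comboMatrix, 2); // far
for (int i = 0; i < 6; i++) {
    float len = glm::length(glm::vec3(p_planes[i])); // length of the xyz normal only
    p_planes[i] /= len; // scales w together with xyz, so the plane itself is unchanged
}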
After following @derhass's solution for extracting the planes correctly, you can do the intersection tests as follows.
For bounding-box/plane intersection: project the box onto the plane's normal (call the resulting half-extent p), compute the midpoint of the box (call it m), and compute the signed distance d of that midpoint from the plane. The box intersects the plane when
d <= p
But for frustum culling we don't just want to know that our box does NOT intersect the frustum plane; we want it to be at distance -p or less from our plane, because only then do we know for sure that NO part of our box is intersecting or inside the plane:
if (d <= -p) // our box is fully outside the plane, so don't draw it, i.e. cull it (d will be negative if the midpoint lies on the other side of the plane)
Similarly, for triangles we check whether the distances of ALL 3 points of the triangle from the plane are negative.
To project a box onto a plane, take the 3 axes (the x, y, z unit vectors for an AABB) of the box, scale them by the box's respective half width, height and depth, and sum the absolute values of their dot products with the plane's normal; that sum is your p.
Note that with the above approach you can also cull against OBBs, not just AABBs, because only the axes change.
EDIT:
How to project a bounding box onto a plane?
Let's consider an AABB for our example. It has the following parameters:
Lower extent Min(x,y,z)
Upper extent Max(x,y,z)
Up vector U = (0,1,0)
Left vector L = (1,0,0)
Front vector F = (0,0,1)
Step 1: calculate the half dimensions
half_width  = (Max.x - Min.x) / 2;
half_height = (Max.y - Min.y) / 2;
half_depth  = (Max.z - Min.z) / 2;
Step 2: project each individual axis of the box onto the plane normal, take only the positive magnitude of each dot product scaled by the corresponding half dimension, and find the total sum. Make sure both the box axes and the plane normal are unit vectors.
float p = (abs(dot(L,N)) * half_width) +
          (abs(dot(U,N)) * half_height) +
          (abs(dot(F,N)) * half_depth);
abs() returns the absolute magnitude; we want it to be positive because we are dealing with distances. N is the plane's unit normal.
Step 3: compute the midpoint of the box
M = (Min + Max) / 2;
Step 4: compute the signed distance of the midpoint from the plane
d = dot(M,N) + plane.w;
Step 5: do the check
d <= -p // return true, i.e. don't render, i.e. cull
You can see how to use this for an OBB, where U, F, L are the axes of the OBB and the centre (midpoint) and half dimensions are parameters you pass in manually.
For a sphere you would likewise calculate the distance of the sphere's centre from the plane (call it d) but do the check
d <= -r // r is the radius of the sphere
Put this in a function called outside(Plane, Bounds) which returns true if the bounds are fully outside the plane; then, for each of the 6 planes:
bool is_inside_frustum()
{
    for (Plane plane : frustum_planes)
    {
        if (outside(plane, AABB))
        {
            return false;
        }
    }
    return true;
}
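As a rough consolidation of the steps above, here is a minimal sketch of outside() in C++ with glm; the AABB struct and the plane layout (xyz = unit normal, w = offset) are assumptions matching the extraction code earlier in the thread:
#include <cmath>
#include <glm/glm.hpp>

struct AABB { glm::vec3 min, max; }; // assumed box representation

// True when the box lies entirely on the negative side of the plane
bool outside(const glm::vec4& plane, const AABB& box)
{
    glm::vec3 n(plane);                           // plane normal N (xyz)
    glm::vec3 half = (box.max - box.min) * 0.5f;  // half width/height/depth
    glm::vec3 mid  = (box.max + box.min) * 0.5f;  // midpoint M
    // For an AABB the box axes are the world axes, so the projected
    // radius p collapses to |N| components times the half extents
    float p = std::abs(n.x) * half.x + std::abs(n.y) * half.y + std::abs(n.z) * half.z;
    float d = glm::dot(mid, n) + plane.w;         // signed distance of M
    return d <= -p;
}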

How to handle incorrect index calculation for discretized ray tracing?

The situation is as follows. I am trying to implement a linear voxel search in a GLSL shader for efficient voxel ray tracing. In other words, I have a 3D texture and I am ray tracing on it, but I am trying to ray trace such that I only ever check voxels intersected by the ray once.
To this effect I have written a program with the following results:
Not efficient but correct:
The above image was obtained by advancing the ray by a small epsilon multiple times and sampling from the texture on each iteration. That produces the correct results, but it's very inefficient.
That would look like:
loop {
    start += direction * 0.01;
    sample(start);
}
To make it efficient I decided to instead implement the following lookup function:
float bound(float val)
{
    if (val >= 0)
        return voxel_size;
    return 0;
}

float planeIntersection(vec3 ray, vec3 origin, vec3 n, vec3 q)
{
    n = normalize(n);
    if (dot(ray, n) != 0)
        return (dot(q, n) - dot(n, origin)) / dot(ray, n);
    return -1;
}

vec3 get_voxel(vec3 start, vec3 direction)
{
    direction = normalize(direction);
    vec3 discretized_pos = ivec3((start * 1.f / (voxel_size))) * voxel_size;
    vec3 n_x = vec3(sign(direction.x), 0, 0);
    vec3 n_y = vec3(0, sign(direction.y), 0);
    vec3 n_z = vec3(0, 0, sign(direction.z));
    float bound_x, bound_y, bound_z;
    bound_x = bound(direction.x);
    bound_y = bound(direction.y);
    bound_z = bound(direction.z);
    float t_x, t_y, t_z;
    t_x = planeIntersection(direction, start, n_x,
        discretized_pos + vec3(bound_x, 0, 0));
    t_y = planeIntersection(direction, start, n_y,
        discretized_pos + vec3(0, bound_y, 0));
    t_z = planeIntersection(direction, start, n_z,
        discretized_pos + vec3(0, 0, bound_z));
    if (t_x < 0)
        t_x = 1.f / 0.f;
    if (t_y < 0)
        t_y = 1.f / 0.f;
    if (t_z < 0)
        t_z = 1.f / 0.f;
    float t = min(t_x, t_y);
    t = min(t, t_z);
    return start + direction * t;
}
Which produces the following result:
Notice the triangle aliasing on the left side of some surfaces.
It seems this aliasing occurs because some coordinates are not being set to their correct voxel.
For example modifying the truncation part as follows:
vec3 discretized_pos = ivec3((start*1.f/(voxel_size)) - vec3(0.1)) * voxel_size;
Creates:
So it has fixed the issue for some surfaces and caused it for others.
I wanted to know if there is a way in which I can correct this truncation so that this error does not happen.
Update:
I have narrowed down the issue a bit. Observe the following image:
The numbers represent the order in which I expect the boxes to be visited.
As you can see, for some of the points the sampling of the fifth box seems to be omitted.
The following is the sampling code:
vec4 grabVoxel(vec3 pos)
{
    pos *= 1.f / base_voxel_size;
    pos.x /= (width - 1);
    pos.y /= (depth - 1);
    pos.z /= (height - 1);
    vec4 voxelVal = texture(voxel_map, pos);
    return voxelVal;
}
Yep, that was the +/- rounding I was talking about in my comments somewhere in your previous questions related to this. What you need to do is have the step equal to the grid size along one of the axes (and test 3 times: once for |dx|=1, then for |dy|=1, and lastly for |dz|=1).
Also, you should create a debug draw of a 2D slice through your map to actually see where the hits for a single specific test ray occurred. Then, based on the direction of the ray along each axis, you set the rounding rules separately. Without this you are just blindly patching one case and corrupting the other two...
Now actually look at this (I linked it for you before but you clearly did not):
Wolf and Doom ray casting techniques
Especially pay attention to:
On the right it shows you how to compute the ray step (your epsilon). You simply scale the ray direction so that one of the coordinates is +/-1. For simplicity, start with a 2D slice through your map. The red dot is the ray start position. Green is the ray step vector for vertical grid line hits and red is for horizontal grid line hits (z will be analogous).
Now you should add a 2D overview of your map through some visible height slice (like in the image on the left), and add a dot or marker for each detected intersection, distinguishing between x, y and z hits by color. Do this for a single ray only (I use the center-of-view ray). First handle the view when you look in the X+ direction, then X-, and when done move on to Y and Z...
In my GLSL volumetric 3D back raytracer, which I also linked for you before, look at these lines:
if (dir.x<0.0) { p+=dir*(((floor(p.x*n)-_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(+1.0,0.0,0.0); }
if (dir.x>0.0) { p+=dir*((( ceil(p.x*n)+_zero)*_n)-ray_pos.x)/dir.x; nnor=vec3(-1.0,0.0,0.0); }
if (dir.y<0.0) { p+=dir*(((floor(p.y*n)-_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,+1.0,0.0); }
if (dir.y>0.0) { p+=dir*((( ceil(p.y*n)+_zero)*_n)-ray_pos.y)/dir.y; nnor=vec3(0.0,-1.0,0.0); }
if (dir.z<0.0) { p+=dir*(((floor(p.z*n)-_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,+1.0); }
if (dir.z>0.0) { p+=dir*((( ceil(p.z*n)+_zero)*_n)-ray_pos.z)/dir.z; nnor=vec3(0.0,0.0,-1.0); }
This is how I did it. As you can see, I use a different rounding/flooring rule for each of the 6 cases. This way you handle each case without corrupting the others. The rounding rule depends on a lot of things, like how your coordinate system is offset relative to (0,0,0) and more, so it might be different in your code, but the if conditions should be the same. Also, as you can see, I am handling this by offsetting the ray start position a bit instead of having these conditions inside the ray traversal loop castray.
That macro casts a ray and looks for intersections with the grid, and on top of that it actually z-sorts the intersections and uses the first valid one (that is what l, ll are for; no other conditions or combinations of ray results are needed). So my way of dealing with this is to cast a ray for each type of intersection (x, y, z), starting at the first intersection with the grid for the same axis. You need to take the starting offset into account so that l, ll represent the intersection distance to the real start of the ray, not to the offset one...
Also, a good idea is to do this on the CPU side first, and only when it's 100% working port it to GLSL, as things like this are very hard to debug in GLSL.
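For reference, here is a minimal CPU-side sketch in C++ of the classic Amanatides & Woo style grid traversal that this kind of stepping boils down to: the sign of each direction component picks the rounding rule once, up front, and every voxel along the ray is then visited exactly once. The function names and the printf stand-in for sampling are placeholders:
#include <cmath>
#include <cstdio>

// Visit every voxel a ray passes through, in order, one boundary crossing per step
void traverse(float ox, float oy, float oz, float dx, float dy, float dz,
              float voxel_size, int nx, int ny, int nz)
{
    int ix = (int)std::floor(ox / voxel_size);
    int iy = (int)std::floor(oy / voxel_size);
    int iz = (int)std::floor(oz / voxel_size);
    int stepX = (dx >= 0) ? 1 : -1; // rounding rule depends on the ray's sign per axis
    int stepY = (dy >= 0) ? 1 : -1;
    int stepZ = (dz >= 0) ? 1 : -1;
    // t at which the ray crosses the next x/y/z voxel boundary
    float nextX = (ix + (stepX > 0 ? 1 : 0)) * voxel_size;
    float nextY = (iy + (stepY > 0 ? 1 : 0)) * voxel_size;
    float nextZ = (iz + (stepZ > 0 ? 1 : 0)) * voxel_size;
    float tMaxX = (dx != 0) ? (nextX - ox) / dx : INFINITY;
    float tMaxY = (dy != 0) ? (nextY - oy) / dy : INFINITY;
    float tMaxZ = (dz != 0) ? (nextZ - oz) / dz : INFINITY;
    float tDeltaX = (dx != 0) ? voxel_size / std::fabs(dx) : INFINITY;
    float tDeltaY = (dy != 0) ? voxel_size / std::fabs(dy) : INFINITY;
    float tDeltaZ = (dz != 0) ? voxel_size / std::fabs(dz) : INFINITY;
    while (ix >= 0 && ix < nx && iy >= 0 && iy < ny && iz >= 0 && iz < nz)
    {
        std::printf("visit voxel (%d,%d,%d)\n", ix, iy, iz); // sample your texture here
        if (tMaxX <= tMaxY && tMaxX <= tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY <= tMaxZ)              { iy += stepY; tMaxY += tDeltaY; }
        else                                  { iz += stepZ; tMaxZ += tDeltaZ; }
    }
}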

cocos2dx detect intersection with polygon sprite

I am using cocos2d-x 3.8.
I try to create two polygon sprites with the following code.
I know we can detect intersections with the bounding box, but that is too rough.
Also, I know we can use the cocos2d-x C++ physics engine to detect collisions, but doesn't it waste a lot of the mobile device's resources? The game I am developing does not need a physics engine.
Is there a way to detect the intersection of polygon sprites?
Thank you.
auto pinfoTree = AutoPolygon::generatePolygon("Tree.png");
auto treeSprite = Sprite::create(pinfoTree);
treeSprite->setPosition(width / 4 * 3 - 30, height / 2 - 200);
this->addChild(treeSprite);

auto pinfoBird = AutoPolygon::generatePolygon("Bird.png");
auto Bird = Sprite::create(pinfoBird);
Bird->setPosition(width / 4 * 3, height / 2);
this->addChild(Bird);
This is a bit more complicated: AutoPolygon gives you a bunch of triangles, while PhysicsBody::createPolygon requires a convex polygon with clockwise winding… so these are 2 different things. The vertex count might even be limited; I think Box2D's maximum count for one polygon is 8.
If you want to try this you'll have to merge the triangles to form polygons. An option would be to start with one triangle and add more as long as the whole thing stays convex. If you can't add any more triangles, start a new polygon. Add all the polygons as PhysicsShapes to your physics body to form a compound object.
I would propose that you don't follow this path, because:
- AutoPolygon is optimized for rendering, not for best-fitting physics; that is a difference. A polygon traced with AutoPolygon will always be bigger than the original sprite, otherwise you would see rendering artifacts.
- You have close to no control over the generated polygons.
- Tracing the shape in the app will increase your startup time.
- Triangle meshes and physics outlines are 2 different things.
I would try a different approach: generate the collision shapes offline. This gives you a bunch of advantages:
- You can generate and tweak the polygons in a visual editor, e.g. by using PhysicsEditor.
- Loading the prepared polygons is way faster.
- You can set additional parameters like mass etc.
- The solution is battle proven and works out of the box.
But if you want to know how polygon intersection works, you can look at this code.
// Calculate the projection of a polygon on an axis
// and return it as a [min, max] interval
public void ProjectPolygon(Vector axis, Polygon polygon, ref float min, ref float max) {
    // To project a point on an axis use the dot product
    float dotProduct = axis.DotProduct(polygon.Points[0]);
    min = dotProduct;
    max = dotProduct;
    for (int i = 1; i < polygon.Points.Count; i++) {
        float d = polygon.Points[i].DotProduct(axis);
        if (d < min) {
            min = d;
        } else if (d > max) {
            max = d;
        }
    }
}

// Calculate the distance between [minA, maxA] and [minB, maxB]
// The distance will be negative if the intervals overlap
public float IntervalDistance(float minA, float maxA, float minB, float maxB) {
    if (minA < minB) {
        return minB - maxA;
    } else {
        return minA - maxB;
    }
}

// Check if polygon A is going to collide with polygon B.
public bool PolygonCollision(Polygon polygonA, Polygon polygonB) {
    bool result = true;
    int edgeCountA = polygonA.Edges.Count;
    int edgeCountB = polygonB.Edges.Count;
    Vector edge;

    // Loop through all the edges of both polygons
    for (int edgeIndex = 0; edgeIndex < edgeCountA + edgeCountB; edgeIndex++) {
        if (edgeIndex < edgeCountA) {
            edge = polygonA.Edges[edgeIndex];
        } else {
            edge = polygonB.Edges[edgeIndex - edgeCountA];
        }

        // ===== Find if the polygons are currently intersecting =====
        // Find the axis perpendicular to the current edge
        Vector axis = new Vector(-edge.Y, edge.X);
        axis.Normalize();

        // Find the projection of each polygon on the current axis
        float minA = 0; float minB = 0; float maxA = 0; float maxB = 0;
        ProjectPolygon(axis, polygonA, ref minA, ref maxA);
        ProjectPolygon(axis, polygonB, ref minB, ref maxB);

        // If the projections do not overlap, we found a separating axis:
        // the polygons cannot be intersecting, so we can stop early
        if (IntervalDistance(minA, maxA, minB, maxB) > 0) {
            result = false;
            break;
        }
    }
    return result;
}
The function can be used this way
bool result = PolygonCollision(polygonA, polygonB);
I once had to program a collision detection algorithm where a ball was to collide with a rotating polygon obstacle. In my case the obstacles were arcs with a certain thickness, moving around an origin; basically each was rotating in an orbit. The ball was also rotating around an orbit about the same origin, and could move between orbits. To check the collision I just had to check whether the ball's angle with respect to the origin was between the lower and upper bound angles of the arc obstacle, and whether the ball and the obstacle were in the same orbit.
In other words, I used the various constraints and properties of the objects involved in the collision to make it more efficient. So use the properties of your objects to check the collision; try a similar approach depending on your objects.
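As a toy illustration of that idea in C++ (none of this is from the answer; the Arc struct, the orbit index, and the simple non-wrapping angle range are all assumptions), the whole test can collapse to two cheap comparisons:
#include <cmath>

// Hypothetical arc obstacle: occupies one orbit ring and an angular range
struct Arc { int orbit; float angleLo, angleHi; }; // angles in radians, 0..2*pi, lo < hi

bool collides(const Arc& arc, int ballOrbit, float ballX, float ballY)
{
    if (ballOrbit != arc.orbit)
        return false;                        // different ring: no collision possible
    float a = std::atan2(ballY, ballX);      // ball's angle about the shared origin
    if (a < 0) a += 2.0f * 3.14159265f;      // wrap into 0..2*pi
    return a >= arc.angleLo && a <= arc.angleHi;
}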

Why is detail lost when computing shadow and reflections in my ray tracer

I am building a ray tracer and I am able to correctly render diffuse and specular parts of my sphere. When I come to calculate shadows and reflections however I end up with a very pixelated result as shown in the below image:
I can see that the shadow is cast in the correct place and if you zoom in the reflection is also visible but again pixelated. I call this method to determine if a pixel is in shade and it is also called recursively by my reflect ray method to determine the reflected colours.
RGBColour Scene::illumination(Ray incidentRay, Shape *closestShape, RGBColour shapeColour, Ray ray)
{
    RGBColour diffuseLight = _backgroundColour;
    RGBColour specularLight = _backgroundColour;
    double projectionNormalToSource = 0.0;
    for (int i = 0; i < _lightSources.size(); i++)
    {
        Ray shadowRay(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()).UnitVector());
        Vector surfaceNormal = closestShape->SurfaceNormal(incidentRay);
        // Lambertian shading.
        projectionNormalToSource = surfaceNormal.ScalarProduct(shadowRay.Direction());
        if (projectionNormalToSource > 0)
        {
            bool isShadow = false;
            std::vector<double> shadowIntersections;
            Ray temp(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()));
            for (int j = 0; j < _sceneObjects.size(); j++)
            {
                shadowIntersections.push_back(_sceneObjects.at(j)->Intersection(temp));
            }
            // Test each point to see if it is in shadow.
            for (int j = 0; j < shadowIntersections.size(); j++)
            {
                if (shadowIntersections.at(j) != -1)
                {
                    if (shadowIntersections.at(j) > _epsilon && shadowIntersections.at(j) <= temp.Direction().Magnitude() && closestShape != _sceneObjects.at(j))
                    {
                        isShadow = true;
                    }
                    break;
                }
            }
            if (!isShadow)
            {
                diffuseLight = diffuseLight + (closestShape->Colour() * projectionNormalToSource * closestShape->DiffuseCoefficient() * _lightSources.at(i).DiffuseIntensity());
                specularLight = specularLight + specularReflection(_lightSources.at(i), projectionNormalToSource, closestShape, incidentRay, temp, ray);
            }
        }
    }
    return diffuseLight + specularLight;
}
As I am able to correctly render the spheres apart from these aspects, I am convinced the problem must lie within this particular method, so I have not posted the others. What I think is happening is that, where pixels retain their initial colour instead of being shaded, I must be incorrectly calculating very small values; the other option is that the calculated ray did not intersect at all. I do not think the latter is likely, since otherwise the same intersection method would return incorrect results elsewhere in the program, yet the spheres render correctly apart from the shading and reflection.
So typically what causes results like this and can you spot any obvious logic errors in my method?
Edit: I have moved my light source in front and I can now see that the shadow appears to be correctly cast for the green sphere and the blue one becomes pixelated. So I think on any subsequent shape iterations something must not be updating correctly.
Edit 2: The first issue has been fixed and the shadows are no longer pixelated; the resolution was to move the break statement into the if statement directly preceding it. The reflections, however, are still pixelated.
Pixelation like this can occur due to numerical instability. An example: suppose you calculate an intersection point that lies on a curved surface, and then use that point as the origin of a new ray (a shadow ray, for example). You would assume the ray wouldn't intersect that same curved surface, but in practice it sometimes does. You could check for this by discarding such self-intersections, but that can cause problems if you decide to implement concave shapes. Another approach is to move the origin of the generated ray along its direction vector by some very small amount, so that no unwanted self-intersection occurs.
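A minimal sketch of that second approach, assuming a Ray(origin, direction) style constructor and a Vector type with UnitVector() and the usual operators as in the question:
const double kRayEpsilon = 1e-4; // assumption: tune to your scene's scale

// Nudge a secondary ray's origin along its direction so it starts just
// off the surface that spawned it (avoids shadow/reflection self-hits)
Ray offsetRay(const Vector& origin, const Vector& direction)
{
    Vector dir = direction.UnitVector();
    return Ray(origin + dir * kRayEpsilon, dir);
}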

Raytracer Refraction Bug

I'm writing a raytracer in C++, and I've been having some issue with refractions. I'm rendering a sphere and a ground plane, and the sphere should refract. However, it looks more like a sphere within a sphere: the "outer" sphere looks to be shaded properly, but not refracting, while the "inner" sphere looks like it's being self-shadowed. Here's a link to what it looks like: http://imgur.com/QVGkeBT.
Here's the relevant code.
//inside main raytrace function
if (refraction > 0.0f) { //the surface is refractive
    //calculate refraction vector
    Ray refract(intersection,
        objList[bestObj]->refractedRay(
            ray.dir, intersection, &cos_theta, &R0));
    //recurse
    refrColor = raytrace(refract);
}
else { //no refraction
    refrColor = background;
}

//refractedRay(vec3,vec3,float*,float*)
//...initialize variables, do geometric transforms

//into air out of obj
if (dot(ray, normal) < 0) {
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, -normal);
}
//into obj out of air
else {
    n1 = 1.0f;
    n2 = ior;
    *cos = dot(ray, normal);
    normal = -normal;
}
//check value under sqrt
float n = n1 / n2;
float disc = 1 - (pow(n, 2) * (1 - pow(*cos, 2)));
if (disc < 0) { //total internal reflection
    return ray - 2 * -(*cos) * normal; //reflection vector
}
return (n * ray) + (((n * (*cos)) - sqrt(disc)) * normal);
The sphere used to look worse; then I remembered to normalize my vectors and it now looks like this. Previously, it looked like only the inner sphere throughout. Inside the main raytrace function, I do the refraction the same way as reflection, just using the refracted ray instead. I've also tried offsetting the incoming intersection point and ray by epsilon to check for self-refraction, like the self-intersection you can get with shadows.
Any help would be appreciated :)
I haven't checked your refraction formulae, but this looks wrong:
//into air out of obj
if (dot(ray, normal) < 0) {
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, -normal);
}
If the dot product of the incident ray and the normal is less than zero, and assuming the normal points outwards from the object (which it probably should), then this case corresponds to air -> inside, so your refractive indices should be swapped. As it is now, you are rendering a sphere with ior 1/ior, and since that refractive index is less than 1 you are observing total internal reflection at the edges.
Here is one of my implementations which you can take a look at to see if anything is missing (it has more features, but you should be able to identify the parts you are interested in and check whether your computations match). To me the rest looks all right, so I think fixing the refractive indices should do it.
The nondeterministic pattern in the center of the sphere, though, definitely looks like self-intersection. Make sure that in the case of reflection you push the reflected ray slightly outside the intersected surface, and in the case of refraction you push the refracted ray slightly inside, to avoid self-intersection.
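To make the index fix concrete, here is a sketch of the corrected branch using the question's variable names (everything outside this if/else is assumed unchanged):
// normal points outwards; dot(ray, normal) < 0 means the ray is
// entering the object (air -> obj), so air's index comes first
if (dot(ray, normal) < 0) {
    n1 = 1.0f;   // from air
    n2 = ior;    // into the object
    *cos = dot(ray, -normal);
}
// leaving the object (obj -> air)
else {
    n1 = ior;
    n2 = 1.0f;
    *cos = dot(ray, normal);
    normal = -normal; // flip so the normal opposes the incident ray
}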