Raytracer won't render more than one instance of an object - C++

I'm writing a raytracer in C++ and I'm having quite a bit of trouble understanding why my output images don't contain all of the objects that should be there. Namely, I'm working with spheres and planes, and I can't draw more than one instance of each.
The object values (radius, location, normals, etc.) are read in from an ASCII file. Here's my intersection test code.
//check primary ray against each object
for(int size = 0; size < objList.size(); size++){
    //if intersect
    if(objList[size]->intersect(ray, origin, &t)){
        if(t < minDist){        //check depth
            minDist = t;        //update depth
            bestObj = size;     //update closest object
        }
    }
}

vec3 intersection = origin + minDist*ray;

//figure out what to draw, if anything
color_t shadeColor;
if(bestObj != -1){ //valid object
    //get base color
    //using rgb color
    if(objList[bestObj]->rgbColor != vec3(-1)){
        shadeColor.r = objList[bestObj]->rgbColor.x;
        shadeColor.g = objList[bestObj]->rgbColor.y;
        shadeColor.b = objList[bestObj]->rgbColor.z;
    }
    //else using rgbf color
    else if(objList[bestObj]->rgbfColor != vec4(-1)){
        shadeColor.r = objList[bestObj]->rgbfColor.x;
        shadeColor.g = objList[bestObj]->rgbfColor.y;
        shadeColor.b = objList[bestObj]->rgbfColor.z;
        //need to do something with alpha value
    }
    //else invalid color
    else{
        cout << "Invalid color." << endl;
    }
    //...the rest is just shadow and reflection tests. There are bugs here as well, but those are for another post
The above code runs inside a loop over every pixel. 'ray' is the direction of the ray and 'origin' is its origin. 'objList' is an STL vector that holds each object in the scene. I've tested to make sure that each object is actually getting put into the vector.
I know that my intersection tests are working...at least for the one object of each type that renders. I've had the program print to a file every value that 'bestObj' ever gets, but it never registers any object other than the last one as 'bestObj'. I realize that this is the problem (no other object ever gets set as 'bestObj'), but I can't figure out why!
Any help would be appreciated :)

I figured out the problem, thanks to didierc. I'm not sure what he was really talking about, but it made me think about how I was handling my pointers. Indeed, though my vector was pushing back every object, I wasn't creating a new object each time I pushed one back. This led to every sphere pointer in the STL vector pointing to the same object (i.e. the last one read in from the file)!
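In case it helps anyone else, here is a minimal, self-contained sketch of the aliasing bug and the fix; Sphere and the values below are hypothetical placeholders, not my actual parsing code.
#include <iostream>
#include <vector>

struct Sphere { float radius = 0.0f; };

int main() {
    std::vector<Sphere*> objList;

    // Broken: one object is reused, so every stored pointer aliases the same sphere.
    Sphere* shared = new Sphere();
    for (int i = 0; i < 3; i++) {
        shared->radius = float(i + 1); // overwrites the same object each pass
        objList.push_back(shared);     // every entry ends up pointing at the last values read
    }
    std::cout << objList[0]->radius << "\n"; // prints 3, not 1

    // Fixed: allocate a fresh object per entry (or store by value / use smart pointers).
    objList.clear();
    for (int i = 0; i < 3; i++) {
        Sphere* fresh = new Sphere();
        fresh->radius = float(i + 1);
        objList.push_back(fresh);
    }
    std::cout << objList[0]->radius << "\n"; // prints 1 as expected
    // (Cleanup of the allocations is omitted for brevity.)
    return 0;
}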

Related

Why is detail lost when computing shadow and reflections in my ray tracer

I am building a ray tracer and I am able to correctly render the diffuse and specular parts of my sphere. When I come to calculate shadows and reflections, however, I end up with a very pixelated result, as shown in the image below:
I can see that the shadow is cast in the correct place, and if you zoom in, the reflection is also visible but again pixelated. I call this method to determine whether a pixel is in shade; it is also called recursively by my reflect-ray method to determine the reflected colours.
RGBColour Scene::illumination(Ray incidentRay, Shape *closestShape, RGBColour shapeColour, Ray ray)
{
    RGBColour diffuseLight = _backgroundColour;
    RGBColour specularLight = _backgroundColour;
    double projectionNormalToSource = 0.0;

    for (int i = 0; i < _lightSources.size(); i++)
    {
        Ray shadowRay(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()).UnitVector());
        Vector surfaceNormal = closestShape->SurfaceNormal(incidentRay);

        //lambertian shading.
        projectionNormalToSource = surfaceNormal.ScalarProduct(shadowRay.Direction());
        if (projectionNormalToSource > 0)
        {
            bool isShadow = false;
            std::vector<double> shadowIntersections;
            Ray temp(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()));

            for (int j = 0; j < _sceneObjects.size(); j++)
            {
                shadowIntersections.push_back(_sceneObjects.at(j)->Intersection(temp));
            }

            //Test each point to see if it is in shadow.
            for (int j = 0; j < shadowIntersections.size(); j++)
            {
                if (shadowIntersections.at(j) != -1)
                {
                    if (shadowIntersections.at(j) > _epsilon && shadowIntersections.at(j) <= temp.Direction().Magnitude() && closestShape != _sceneObjects.at(j))
                    {
                        isShadow = true;
                    }
                    break;
                }
            }

            if (!isShadow)
            {
                diffuseLight = diffuseLight + (closestShape->Colour() * projectionNormalToSource * closestShape->DiffuseCoefficient() * _lightSources.at(i).DiffuseIntensity());
                specularLight = specularLight + specularReflection(_lightSources.at(i), projectionNormalToSource, closestShape, incidentRay, temp, ray);
            }
        }
    }
    return diffuseLight + specularLight;
}
Since I can render the spheres correctly apart from these aspects, I am convinced the problem must lie within this particular method, so I have not posted the others. What I think is happening is that, for the pixels that retain their initial colour instead of being shaded, I am either calculating incorrectly small values or the calculated ray did not intersect at all. I do not think the latter is likely, though, because the same intersection method works correctly elsewhere in the program: the spheres themselves render fine, excluding the shading and reflection.
So, what typically causes results like this, and can you spot any obvious logic errors in my method?
Edit: I have moved my light source to the front, and I can now see that the shadow appears to be correctly cast for the green sphere while the blue one becomes pixelated. So I think something must not be updating correctly on subsequent shape iterations.
Edit 2: The first issue has been fixed and the shadows are no longer pixelated; the resolution was to move the break statement into the if statement directly preceding it. The reflections, however, are still pixelated.
Pixelation like this could occur due to numerical instability. An example: suppose you calculate an intersection point that lies on a curved surface. You then use that point as the origin of a ray (a shadow ray, for example). You would assume that the ray wouldn't intersect that curved surface, but in practice it sometimes can. You could check for this by discarding such self-intersections, but that could cause problems if you decide to implement concave shapes. Another approach could be to move the origin of the generated ray along its direction vector by some very small amount, so that no unwanted self-intersection occurs.
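A minimal sketch of that second suggestion might look like the following; Vec3 and Ray are hypothetical placeholder types rather than the classes from the question, and the direction is assumed to be normalized.
// Nudge the origin a small distance along the ray direction so the shadow or
// reflection ray does not immediately re-hit the surface it starts on.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
};

struct Ray { Vec3 origin, direction; };

Ray makeSecondaryRay(const Vec3& hitPoint, const Vec3& direction, double epsilon = 1e-4) {
    return { hitPoint + direction * epsilon, direction };
}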

Removable light sources like Minecraft

I have succeeded in making light sources like the ones in Minecraft, with a very good result. I used a cellular automaton method to create the following light.
But say I have 2 or more light sources near each other and I want to remove one of them.
Can you recommend a way to recalculate only the affected tiles?
Here is an image showing one light source: http://i.stack.imgur.com/E0dqR.png
Below is my code for calculating a light source and all of its neighboring tiles.
void World::processNeighborLight(Tile *pCurrent, int pLightLevel, int *pIterationCount)
{
    *pIterationCount += 1; // Just to keep track of how many iterations were made.
    pCurrent->updateLight(pLightLevel);

    int newLight = pLightLevel - 1;
    if (newLight <= 0) return;

    Tile *N = pCurrent->getRelative(sf::Vector2i(0, -1));
    Tile *E = pCurrent->getRelative(sf::Vector2i(1, 0));
    Tile *S = pCurrent->getRelative(sf::Vector2i(0, 1));
    Tile *W = pCurrent->getRelative(sf::Vector2i(-1, 0));

    if (N->getLightLevel() < newLight)
    {
        N->updateLight(newLight);
        processNeighborLight(N, newLight, pIterationCount);
    }
    if (E->getLightLevel() < newLight)
    {
        E->updateLight(newLight);
        processNeighborLight(E, newLight, pIterationCount);
    }
    if (S->getLightLevel() < newLight)
    {
        S->updateLight(newLight);
        processNeighborLight(S, newLight, pIterationCount);
    }
    if (W->getLightLevel() < newLight)
    {
        W->updateLight(newLight);
        processNeighborLight(W, newLight, pIterationCount);
    }
}
Rather than having each cell store a single light level, you could have it store a collection of (lightsource, lightlevel) pairs (expensive?), and similarly have each light source store a collection of (cell, lightlevel) pairs (cheap!).
void KillLight(LightSource &kill_me)
{
    // All we really do is iterate through each illuminated cell, and remove this lightsource
    // from its list of light sources.
    for (auto i = kill_me.cells.begin(); i != kill_me.cells.end(); ++i)
    {
        // The cell contains some kind of collection holding either the lightsources that hit it
        // or <lightsource, illumination level> pairs. All we need to do is remove this light
        // from that collection and recalculate the cell's light level.
        i->lights->erase(kill_me);        // Note: light sources must be comparable objects.
        i->RecalculateMaxIllumination();  // The cell needs to figure out which of its sources is brightest now.
    }
    // And then handle other lightsource removal cleanup actions. Probably just have this method
    // be called by ~LightSource().
}
If having each cell store a list of light sources hitting it is too expensive, the impact of having each light source remember which cells it illuminates is still cheap. I can think of alternate solutions, but they all involve some kind of mapping from a given light source to the set of all cells it illuminates.
This assumes, of course, that your light sources are relatively few in number compared to the number of cells, and that there are no extremely luminous light sources that illuminate tens of thousands of cells.
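For illustration, a minimal sketch of the cell-side bookkeeping this scheme assumes could look like the following; Cell and its members are hypothetical, not the question's Tile class.
#include <algorithm>
#include <map>

struct LightSource; // compared by address here, so a forward declaration is enough

struct Cell {
    std::map<const LightSource*, int> lights; // light source -> level it contributes to this cell
    int lightLevel = 0;

    // The cell figures out which of its remaining sources is brightest.
    void RecalculateMaxIllumination() {
        lightLevel = 0;
        for (const auto& entry : lights)
            lightLevel = std::max(lightLevel, entry.second);
    }

    void RemoveLight(const LightSource* source) {
        lights.erase(source);
        RecalculateMaxIllumination();
    }
};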

C++: Unsure if code is multithreadable

I'm working on a small piece of code which takes a very large amount of time to complete, so I was thinking of multithreading it, either with pthreads (which I hardly understand, but think I can master a lot more quickly) or with some GPGPU implementation (probably OpenCL, as I have an AMD card at home and the PCs I use at my office have various NVIDIA cards).
while(sDead < (unsigned long) nrPoints*nrPoints) {
    pPoint1 = distrib(*rng);
    pPoint2 = distrib(*rng);
    outAxel = -1;
    if(pPoint1 != pPoint2) {
        point1 = space->getPointRef(pPoint1);
        point2 = space->getPointRef(pPoint2);
        outAxel = point1->influencedBy(point2, distThres);
        if(outAxel == 0 || outAxel == 1)
            sDead++;
        else
            sDead = 0;
    }
    i++;
}
Where distrib is a uniform_int_distribution with a = 0 and b = nrPoints-1.
For clarity, here is the structure I'm working with:
class Space {
    vector<Point> points;
    // (more stuff)
};

class Point {
    vector<Coords> coordinates;
    // (more stuff)
};

struct Coords {
    char Range;
    bool TypeOfCoord;
    char Coord;
};
The length of coordinates is the same for all Points, and Point[x].coordinates[y].Range == Point[z].coordinates[y].Range for all x, y, and z. The same goes for TypeOfCoord.
Some background: during each run of the while loop, two randomly drawn Points from space are tested for interaction. influencedBy() checks whether point1 and point2 are close enough to each other to interact (the distance depends on some metric, but it boils down to similarity in Coord; if the distance is smaller than distThres, interaction is possible). Interaction means that one of the Coord variables which doesn't equal the corresponding Coord in the other object is flipped to equal it. This decreases the distance between the two Points but also changes the distance from the changed Point to every other Point in Space, hence my question of whether or not this is multithreadable. As I said, I'm a complete newbie to multithreading and I'm not sure if I can safely implement a function that chops this up, so I was looking for your input. Suggestions are also very welcome.
E: The influencedBy() function (and the functions it in turn calls) can be found here. Functions that I did not include, such as getFeature() and getNrFeatures(), are tiny and cannot possibly contribute much. Take note that I used generalised names for objects in this question, but I might mess up or make things more confusing if I replace them in the other code, so I've left the original names there. For the record:
Space = CultSpace
Point = CultVec
Points = Points
Coordinates = Feats
Coords = Feature
TypeOfCoord = Nomin
Coord = Trait
(Choosing "Answer" because the format permits better presentation. Not quite what your're asking for, but let's clarify this first.)
Later
How often is the loop executed until this condition becomes true?
while(sDead < (unsigned long) nrPoints*nrPoints) {
Probably not a big gain, but:
pPoint1 = distrib(*rng);
do {
    pPoint2 = distrib(*rng);
} while (pPoint1 == pPoint2);
outAxel = -1;
How costly is getPointRef? Linear search in Space?
point1 = space->getPointRef(pPoint1);
point2 = space->getPointRef(pPoint2);
outAxel = point1->influencedBy(point2, distThres);
Is it really necessary to recompute the "distance of the changed point to every other point in Space" immediately after a "flip"?

C++ Collision Detection doesn't work on last check?

Over the last few days I have been trying to implement simple collision detection of objects drawn using OpenGL.
With the aid of the Pearson book Computer Graphics with OpenGL, I have managed to write the following function:
void move(){
    if(check_collision(sprite, platform1) || check_collision(sprite, platform2)){ //if colliding...
        if     (downKeyPressed ){ y_Vel += speed; downKeyPressed  = false; } //going down
        else if(upKeyPressed   ){ y_Vel -= speed; upKeyPressed    = false; } //going up
        else if(rightKeyPressed){ x_Vel -= speed; rightKeyPressed = false; } //going right
        else if(leftKeyPressed ){ x_Vel += speed; leftKeyPressed  = false; } //going left
    } // always glitches on whatever is the last else-if above?!?!
    else{
        glTranslatef(sprite.x + x_Vel, sprite.y + y_Vel, 0.0);
    }
}
My sprite moves in accordance to keyboard inputs (the arrow keys). If it collides with a stationary object it stops moving and stays in its position.
So far, it works when colliding with the top, left side, and bottom of the stationary object. Unfortunately (even though I use the same logic) the right-hand side doesn't work, and upon a collision the sprite is redrawn at its original x/y coordinates. I'm baffled.
As it turns out, whichever check is last in the move() function (the last else-if) is the one that doesn't work... I have swapped the left with the right, and sure enough, when left is the last one it is the one that plays up :(
Any advice or ideas on how I can improve this and stop it glitching?
Please excuse my naivety and amateur coding. I'm merely a self-taught beginner. Thanks.
You should not use an else if. There is a possibility that it is hitting a side and the top or the bottom in the same frame. Try changing those all to ifs, because you want to check each one, or at the least change it to this:
if( /* check top */)
{
}
else if( /* check bot */)
{
}
if( /* check right */ )
{
}
else if( /* check left */)
{
}
Also, you should avoid declaring global variables like y_Vel and x_Vel, as this creates confusion. You may just be doing this to get your feet wet, but I would avoid it. Create a class for each object and then have the velocities as members of that class.
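A minimal sketch of that last suggestion, with hypothetical names, could look like this:
// Keep position and velocity together in a class instead of relying on globals.
class MovingObject {
public:
    float x = 0.0f, y = 0.0f;        // position
    float xVel = 0.0f, yVel = 0.0f;  // velocity

    // Called once per frame to advance the object by its velocity.
    void update() {
        x += xVel;
        y += yVel;
    }
};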
Well, it seems to me that you have an issue when it comes to translating the actual object.
Your move code states
if(there is a collision)
{
//do stuff
}
else
{
glTranslateF( );
}
So, whenever there is a collision, the translate function never gets called.
My opinion is that you should pull the glTranslatef() call out of the else {...} and just have it called every time. However, it seems you're using the exact same 'draw' function for every rectangle, not just the player sprite. You'll probably have to find a way to distinguish between regular rectangles (such as the platforms) and the player rectangle. Perhaps the simplest way for you to implement this would be to have two different drawSprite functions: one for regular platforms, and the other for the player. Only call the move() function from within the player's draw function (perhaps called drawPlayer()?)
I don't quite have the time to look over all of your code to make a more educated and specific suggestion at the moment; however, I'll be free later this evening if this question is still open and needing help.
As you've already figured out, the problem is related to glTranslate(). Since you operate on both the sprite's position and its velocity, you should repeatedly update the position using the velocity. That is:
sprite.x += x_vel;
sprite.y += y_vel;
and do it simply all the time (i.e. on some timer, or every frame if the scene is redrawn repeatedly). A collision is then equivalent to changing the velocity vector (x_vel, y_vel) in some way to simulate the collision effect: either zero it to stop any movement at all, or change a velocity component's sign (x_vel = -x_vel) to make it rebound in a perfectly elastic manner, or do something else that fits.
What happens now with glTranslate() is that x_vel and y_vel actually hold offsets from the starting position rather than movement velocities.
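A rough sketch of that per-frame update, using a hypothetical Sprite struct and a collidesWithPlatform() stand-in for the existing check_collision() calls, might look like this:
struct Sprite { float x, y; };

bool collidesWithPlatform(const Sprite& s); // assume the existing collision test lives here

void updateSprite(Sprite& sprite, float& x_vel, float& y_vel) {
    sprite.x += x_vel; // integrate position from velocity every frame
    sprite.y += y_vel;

    if (collidesWithPlatform(sprite)) {
        x_vel = 0.0f;  // stop dead on impact...
        y_vel = 0.0f;
        // ...or flip the signs instead for an elastic rebound:
        // x_vel = -x_vel;  y_vel = -y_vel;
    }
}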

Brute force collision detection for two objects too slow

I have a project where I need to check whether two objects (about 10,000 triangles each, rendered in OpenGL) collide, using the brute-force collision algorithm. The two objects are not moving. I will have to translate them to some positions and find, e.g., 100 triangle collisions, etc.
So far I have written code that checks for line-plane intersections between the two models. If I've got everything straight, I need to check every edge of every triangle of the first model against the plane of each triangle of the other model. This means 3 nested 'for' loops that take hours to finish. I suppose I must have something wrong or have misunderstood the whole concept.
for (int i=0; i<model1_faces.num; i++) {
    for (int j=0; j<3; j++) {
        x1[j] = model1_vertices[model1_faces[i].v[j]-1].x;
        y1[j] = model1_vertices[model1_faces[i].v[j]-1].y;
        z1[j] = model1_vertices[model1_faces[i].v[j]-1].z;
    }
    A.x = x1[0];
    A.y = y1[0];
    A.z = z1[0];
    B.x = x1[1];
    B.y = y1[1];
    B.z = z1[1];
    C.x = x1[2];
    C.y = y1[2];
    C.z = z1[2];
    TriangleNormal = findNormalVector((B-A)*(C-A));
    RayDirection = B-A;
    for (int j=0; j<model2_faces.num; j++) {
        PointOnPlane = model2_vertices[model2_faces[j].v[0]-1]; // Any point of the triangle
        system("PAUSE"); // debug pause: this blocks for a key press on every iteration
        float D1 = (A-PointOnPlane)^(TriangleNormal); // Distance from A to the plane of triangle j
        float D2 = (B-PointOnPlane)^(TriangleNormal);
        if ((D1*D2) >= 0) continue; // Line AB doesn't cross the triangle's plane
        if (D1==D2) continue;       // Line parallel to the plane
        CollisionVect = A + (RayDirection) * (-D1/(D2-D1));
        Vector temp;
        temp = TriangleNormal*(RayDirection);
        if (temp^(CollisionVect-A) < 0) continue;
        temp = TriangleNormal*(C-B);
        if (temp^(CollisionVect-B) < 0) continue;
        temp = TriangleNormal*(A-C);
        if (temp^(CollisionVect-A) < 0) continue;
        // If I reach this point I had a collision //
        cout << "Had collision!!" << endl;
    }
}
Also, I do not know exactly where this function should be called: in my render function, so that it runs continuously while rendering, or just once, given that I only need to check a collision between non-moving objects?
I would appreciate some explanation, and if you're too busy or bored to look at my code, just help me understand the whole concept a bit better.
As suggested already, you can use bounding volumes. To make best use of these, you can arrange your bounding volumes in an Octree, in which case the volumes are boxes.
At the outermost level, each bounding volume contains the entire object. So you can test whether the two objects might intersect by comparing their zero-level bounding volumes. Testing for intersection of two boxes where all the faces are axis-aligned is trivial.
The octree will index which faces belong to which subdivisions of the bounding volume. So some faces will of course belong to more than one volume and may be tested multiple times.
The benefit is that you can prune away many of the brute-force tests that are guaranteed to fail, because only a handful of your subvolumes will actually intersect. The actual intersection testing is still brute-force, but on a small subset of faces.
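For reference, a minimal sketch of that axis-aligned box test could look like the following; Aabb is a hypothetical struct that an octree node (or a whole model) would carry.
struct Aabb {
    float minX, minY, minZ;
    float maxX, maxY, maxZ;
};

// Two axis-aligned boxes overlap exactly when their extents overlap on every axis.
bool aabbOverlap(const Aabb& a, const Aabb& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}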
Brute force collision detection often does not scale, as you have noticed. :) The usual approach is to define a bounding volume that contains your models/shapes and simplifies the intersection calculations. Bounding volumes come in all shapes and sizes depending on your models. They can be spheres, boxes, etc.
In addition to defining bounding volumes, you'll want to detect collision in your update section of code, where you are most likely passing in some delta time. That delta time is often needed to determine how far objects need to move and if a collision occurred in that timeframe.
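As a hedged sketch of the sphere variant mentioned above, a bounding-sphere pre-test could look like this; BoundingSphere is a hypothetical struct, and each model's center and radius would need to be precomputed.
struct BoundingSphere { float cx, cy, cz, radius; };

// If the spheres enclosing the two meshes do not overlap, no triangle-level test is needed.
bool spheresOverlap(const BoundingSphere& a, const BoundingSphere& b) {
    float dx = a.cx - b.cx, dy = a.cy - b.cy, dz = a.cz - b.cz;
    float distSq = dx * dx + dy * dy + dz * dz;
    float radiusSum = a.radius + b.radius;
    return distSq <= radiusSum * radiusSum; // compare squared distances to avoid a sqrt
}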