Rotate vector to new base vector - c++

In a raycaster I am developing, I am trying to implement hemisphere random sampling, with the option to rotate the hemisphere toward a given direction and then take a random point on it.
The first version worked fine because the sampling was uniform, and changing the direction just meant flipping the point to the other hemisphere, which was simple:
Vec3f UniformSampleSphere() {
    const Vec2f& u = GetVec2f(); // get two random numbers
    float z = 1 - 2 * u.x;
    float r = std::sqrt(std::max((float)0, (float)1 - z * z));
    float phi = 2 * PI_F * u.y;
    return Vec3f(r * std::cos(phi), r * std::sin(phi), z);
}
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    auto toReturn = UniformSampleSphere();
    if (Dot(toReturn, direction) < 0) // flip to the hemisphere around direction
        toReturn = -toReturn;
    return toReturn;
}
But with cosine-weighted hemisphere sampling I am having trouble rotating the hemisphere properly and finding a random direction inside the correctly rotated hemisphere.
The left side of the picture shows what works now; the right side shows the result after applying the rotation, which is the part I am missing.
So the final function will be something like this:
Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    auto toReturn = CosineSampleHemisphere();
    /*
      Some magic here that rotates toReturn into the hemisphere
      around `direction`
    */
    return toReturn;
}
I used code from Cosine weighted hemisphere sampling.
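For reference, a common way to fill in that missing rotation (a sketch under assumptions, not code from the post: the Normalize and Cross helpers, component access and scalar multiplication on Vec3f are presumed to exist) is to build an orthonormal basis around the target direction and express the cosine-weighted sample, which is generated around the local +Z axis, in that basis:

Vec3f GetRandomOnHemiSphere(Vec3f direction) {
    Vec3f n = Normalize(direction);
    // Pick any helper vector that is not (nearly) parallel to n.
    Vec3f helper = std::fabs(n.x) > 0.9f ? Vec3f(0, 1, 0) : Vec3f(1, 0, 0);
    Vec3f t = Normalize(Cross(helper, n)); // tangent
    Vec3f b = Cross(n, t);                 // bitangent
    Vec3f local = CosineSampleHemisphere(); // sampled around local +Z
    // Change of basis: local x -> t, local y -> b, local z -> n.
    return t * local.x + b * local.y + n * local.z;
}

The same frame construction also works for the uniform sampler, which avoids the hemisphere-flipping trick entirely.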

Related

Simple Ray Tracing with Lambertian Shading, Confusion

I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture of the output when I run simple perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by subtracting the sphere's position from the hit point.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I noticed something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector whose components all lie within the range [-r, r]; but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector whose z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units apart along the z axis? The two z values are within 1 of each other in absolute value, which is what leads me to think my problem has something to do with this.
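(Side note: the invariant described here is easy to check in code. The snippet below is a hypothetical sanity check built on the post's Position3f/toVector types, with an assumed length() accessor on Vector3f and the sphere's members exposed for the test.)

// For a genuine hit point on a sphere (center c, radius r), |hitPoint - c| should be ~r.
Vector3f centerToHit = (hitPoint - sphere.position).toVector();
assert(std::fabs(centerToHit.length() - sphere.radius) < 1e-3f); // requires <cassert> and <cmath>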
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist); //.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), each with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (the same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d * (e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part * part) - (part3) * ((part2 * part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this, I noticed that in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
This is incorrect; d should be negated. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), but I removed that when I implemented a member function to negate any given vector and forgot to go back and call it on d. After fixing that and moving my spheres into the negative z half-space, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
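For completeness, the corrected roots reduce to the standard ray-sphere quadratic. A minimal sketch using the same variables and operator conventions as TestViewRay above (operator* on two vectors being a dot product), not the project's exact code:

Position3f ec = (e - c);                     // ray origin minus sphere center
float a = d * d;
float b = d * ec;                            // this is the term whose sign was wrong
float discriminant = b * b - a * (ec * ec - r * r);
if (discriminant >= 0)
{
    float root = sqrt(discriminant);
    float t_sub = (-b - root) / a;           // nearer intersection
    float t_add = (-b + root) / a;           // farther intersection
    float t = (t_sub > 0) ? t_sub : t_add;   // prefer the nearer positive root
    // accept the hit only if t > 0, as in TestViewRay above
}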

C++ How to scale a shape and create an if function to not print if too big after scale?

Given a shape's original centroid and vertices (i.e. if it's a triangle, I know all three vertex coordinates), how could I create a scaling function that takes a scaling factor as a parameter, as below? My current code is wrong and the results are huge shapes, much larger than what I'm scaling by (I only want a scale factor of 2).
void Shape::scale(double factor)
{
    int x, y, xx, xy;
    int disx, disy;
    for (itr = vertices.begin(); itr != vertices.end(); ++itr) {
        // translate obj to origin (0,0)
        x = itr->getX() - centroid.getX();
        y = itr->getY() - centroid.getY();
        // finds distance between centroid and vertex
        disx = x + itr->getX();
        disy = y + itr->getY();
        xx = disx * factor;
        xy = disy * factor;
        // translate obj back
        xx = xx + centroid.getX();
        xy = xy + centroid.getY();
        // set new coord
        itr->setX(xx);
        itr->setY(xy);
    }
}
I know how to iterate over the vertices; my main point of confusion is how to apply the scale factor mathematically to resize the shape.
This is how I declare and initialise a vertex:
// could I possibly do (scale * x, scale * y)? or would that be problematic?
vertices.push_back(Vertex(x, y));
Also, the grid is, say, 100x100. If a scaled shape would be too big to fit into that grid, I want to exit the scale function so the shape won't be enlarged at all. How can this be done effectively? So far I have a for loop, but it only iterates over the vertices, so it would only stop the individual vertices that land outside the grid instead of cancelling the entire shape, which would be ideal.
If my question is too broad, please ask and I shall edit it further.
The first thing you need to do is find the center of mass of your set of points, which is the arithmetic mean of their coordinates. Then, for each point, take the line between the center of mass and that point. The only thing left is to place the point on that line, but at factor * current_distance away, where current_distance is the distance from the mass center to the point before rescaling.
void Shape::scale(double factor)
{
    Vertex mass_center = Vertex(0., 0.);
    for (int i = 0; i < vertices.size(); i++)
    {
        mass_center.x += vertices[i].x;
        mass_center.y += vertices[i].y;
    }
    mass_center.x /= vertices.size();
    mass_center.y /= vertices.size();
    for (int i = 0; i < vertices.size(); i++)
    {
        // this is a vector that leads from the mass center to the current vertex
        Vertex vec = Vertex(vertices[i].x - mass_center.x, vertices[i].y - mass_center.y);
        vertices[i].x = mass_center.x + factor * vec.x;
        vertices[i].y = mass_center.y + factor * vec.y;
    }
}
If you already know the centroid of a shape and the vertices are stored as offsets from that point, then scaling in rectangular coordinates is just multiplying the x and y components of each vertex by the appropriate scaling factor (with a negative value flipping the shape around the axis).
void Shape::scale(double x_factor, double y_factor) {
    for (auto i = 0; i < vertices.size(); ++i) {
        vertices[i].x *= x_factor;
        vertices[i].y *= y_factor;
    }
}
You could then just overload this function with one that takes a single parameter and calls this function with the same value for x and y.
void Shape::scale(double factor) {
    Shape::scale(factor, factor);
}
If your vertex values are not centered at the origin (i.e. they are absolute coordinates rather than offsets from the centroid), you will also have to translate them to the origin before scaling and translate them back afterwards, as in the first answer.
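As for the "don't enlarge the shape if it would leave the grid" part of the question, one way (a sketch only: the grid-size parameters and the scaleIfFits name are made up here, and it reuses the Vertex getters from the question) is to do a dry run over the scaled positions before committing anything:

bool Shape::scaleIfFits(double factor, double gridW, double gridH)
{
    // Trial pass: compute every scaled vertex but do not store it yet.
    for (const Vertex& v : vertices)
    {
        double sx = centroid.getX() + factor * (v.getX() - centroid.getX());
        double sy = centroid.getY() + factor * (v.getY() - centroid.getY());
        if (sx < 0 || sx >= gridW || sy < 0 || sy >= gridH)
            return false;   // would not fit: cancel, shape stays unchanged
    }
    scale(factor);          // every vertex fits, so apply the real scaling
    return true;
}

Doing the bounds check in a separate pass is what lets you cancel the whole shape rather than just the vertices that fall outside the grid.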

Optimizing a Ray Tracer

I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
    for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
    {
        calculateFPS(); // calculates the current fps and prints it
        for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
        {
            fb->Set(u, v, 0xFFAAAAAA); // background color
            fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
            V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
            for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
            {
                if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
                    continue;
                for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
                {
                    V3 Vs[3]; // triangle vertices
                    Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
                    Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
                    Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
                    if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
                        continue;
                    if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
                        continue;
                    fb->SetZ(u, v, bgt[2]);
                    float alpha = 1.0f - bgt[0] - bgt[1];
                    float beta = bgt[0];
                    float gamma = bgt[1];
                    V3 Cs[3]; // triangle vertex colors
                    Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
                    Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
                    Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
                    V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
                    fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
                }
            }
        }
        fb->redraw();
        Fl::check();
    }
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
    M33 m; // 3x3 matrix class
    m.SetColumn(0, Vs[1] - Vs[0]);
    m.SetColumn(1, Vs[2] - Vs[0]);
    m.SetColumn(2, r * -1.0f);
    V3 ret; // Vector3 class
    V3 &C = *this;
    ret = m.Inverse() * (C - Vs[0]);
    return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy. Right now the code intersects all rays with all triangles of all objects. With a BVH, we instead ask: "given this ray, which triangles intersect?" This means that for each ray, you generally only need to test for intersection with a handful of primitives and triangles, rather than every single triangle in the scene.
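A first step toward that idea, well short of a full BVH, is to give each mesh a precomputed axis-aligned bounding box and skip all of its triangles whenever the ray misses the box. The sketch below is an illustration, not code from the project: the bbMin/bbMax members on the mesh are assumed, and V3 is indexed the same way bgt[2] is indexed above.

// Classic slab test: the ray hits the box iff the parameter intervals of
// all three axis slabs overlap.
bool RayHitsAABB(V3 orig, V3 dir, V3 bbMin, V3 bbMax)
{
    float tmin = 0.0f, tmax = FLT_MAX;
    for (int axis = 0; axis < 3; axis++)
    {
        float invD = 1.0f / dir[axis];
        float t0 = (bbMin[axis] - orig[axis]) * invD;
        float t1 = (bbMax[axis] - orig[axis]) * invD;
        if (invD < 0.0f) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmax < tmin) return false; // slab intervals do not overlap: miss
    }
    return true;
}

// Usage inside RayTrace(), just before the per-triangle loop:
// if (!RayHitsAABB(ppc->C, ray, tmeshes[tmi]->bbMin, tmeshes[tmi]->bbMax))
//     continue;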
IntersectRayWithTriangleWithThisOrigin
From the look of it, it creates the inverse transform matrix from the triangle edges (the triangle's basis vectors are X and Y) and uses it to solve for the intersection. I do not quite get the Z axis: I would expect the ray direction there and not the position of the pixel (ray origin), but I may be misinterpreting something.
Anyway, the inverse matrix computation is the biggest problem: you are computing it for each triangle, per pixel, and that is a lot. It would be faster to compute the inverse transform matrix of each triangle once, before ray tracing, where X and Y are the triangle's basis vectors and Z is perpendicular to both of them, always facing the same direction towards the camera. Then you just transform your ray into that space and check the limits of the intersection; that is only a matrix * vector product and a few ifs instead of an inverse matrix computation per triangle per pixel.
Another way would be to solve the ray vs. plane intersection algebraically, which should lead to a much simpler equation than a matrix inversion; after that it is just a matter of bounds checking against the basis vectors.
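A sketch of that precomputation, reusing the M33/V3 classes shown in the question (the Cross helper and the TriCache struct are assumptions added for illustration, not existing code):

struct TriCache
{
    M33 inv; // inverse of the matrix [edge1 | edge2 | normal]
    V3  v0;  // first vertex, needed to offset the ray origin
};

// Built once per triangle, before the per-pixel loops.
TriCache BuildTriCache(V3 Vs[3])
{
    V3 e1 = Vs[1] - Vs[0];
    V3 e2 = Vs[2] - Vs[0];
    M33 m;
    m.SetColumn(0, e1);
    m.SetColumn(1, e2);
    m.SetColumn(2, Cross(e1, e2)); // perpendicular to both edges
    TriCache c;
    c.inv = m.Inverse();
    c.v0 = Vs[0];
    return c;
}

// Per ray: only a matrix*vector product and a few comparisons.
bool IntersectCached(TriCache c, V3 rayOrigin, V3 rayDir, float &t)
{
    V3 o = c.inv * (rayOrigin - c.v0); // ray origin in triangle space
    V3 d = c.inv * rayDir;             // ray direction in triangle space
    if (d[2] == 0.0f) return false;    // ray parallel to the triangle's plane
    t = -o[2] / d[2];                  // hit the plane local-z == 0
    float beta  = o[0] + t * d[0];     // barycentric coordinates
    float gamma = o[1] + t * d[1];
    return t > 0.0f && beta >= 0.0f && gamma >= 0.0f && beta + gamma <= 1.0f;
}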

Wave vector in 2 dimensions

So I'm trying to make the player shoot a bullet that goes towards the mouse in a wavy pattern. I can get the bullet to move in a wavy pattern (albeit not really how I predicted), but not towards the mouse.
Vector2 BulletFun::sine(Vector2 vec) {
    float w = (2 * PI) / 1000;     // where 1000 is the period
    float waveNum = (2 * PI) / 5;  // where 5 is the wavelength
    Vector2 k(0.0F, waveNum);
    float t = k.dot(vec) - (w * _time);
    float x = 5 * cos(t);          // where 5 is the amplitude
    float y = 5 * sin(t);
    Vector2 result(x, y);
    return result;
}
Right now the speed isn't much of a concern; that shouldn't be too much of a problem once I have this figured out. I do get some angle change, but it seems to be reversed and covers only 1/8th of a circle.
I'm probably miscalculating something somewhere. I just kind of learned about wave vectors.
I've tried a few other things, such as one-dimensional travelling waves and another approach involving adjusting a normal sine wave by vec, which had more or less the same result.
Thanks!
EDIT:
vec is the displacement from the player's location to the mouse click location. The return value is a new vector adjusted to follow a wave pattern; BulletFun::sine is called each time the bullet receives an update.
The setup is something like this:
void Bullet::update() {
    _velocity = BulletFun::sine(_displacement);
    _location.add(_velocity); // add is a property of Tuple,
                              // which Vector2 and Point2 inherit
}
In pseudocode, what you need to do is the following:
waveVector = Vector2(travelDistance, amplitude * cos(2*PI*frequency*travelDistance/unitDistance));
cosTheta = directionVector.norm().dot(waveVector.norm());
theta = acos(cosTheta);
waveVector.rotate(theta);
waveVector.translate(originPosition);
That should compute the wave vector in a traditional coordinate frame, and then rotate it to the local coordinate frame of the direction vector (where the direction vector is the local x-axis), and then translate the wave vector relative to your desired origin position of the wave beam or whatever...
This will result in a function very similar to
Vector2
BulletFun::sine(Bullet _bullet, float _amplitude, float _frequency, float _unitDistance)
{
    float _displacement = _bullet.getDisplacement();
    float omega = 2.0f * PI * _frequency * _displacement / _unitDistance;
    // Compute the wave coordinate on the traditional, untransformed
    // Cartesian coordinate frame.
    Vector2 wave(_displacement, _amplitude * cos(omega));
    // The dot product of two unit vectors is the cosine of the
    // angle between them.
    float cosTheta = _bullet.getDirection().normalize().dot(wave.normalize());
    float theta = acos(cosTheta);
    // Rotate the wave coordinate onto the direction vector,
    // then translate it relative to the bullet's origin.
    wave.rotate(theta);
    wave.translate(_bullet.origin());
    return wave;
}
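The rotate() and translate() calls above assume Vector2 provides them. If it does not, rotate() is just the standard 2D rotation; a sketch, assuming public x and y members:

void Vector2::rotate(float theta)
{
    float c = cos(theta);
    float s = sin(theta);
    float rx = x * c - y * s; // compute both components from the old values
    float ry = x * s + y * c;
    x = rx;
    y = ry;
}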

Rotating coordinates around an axis

I'm representing a shape as a set of coordinates in 3D, and I'm trying to rotate the whole object around an axis (in this case the Z axis, but I'd like to rotate around all three once I get it working).
I've written some code to do this using a rotation matrix:
// Coord is a 3D vector of floats
// pos is a coordinate
// angles is a 3D vector; each component is the angle of rotation around the
// corresponding axis, in radians
Coord<float> Polymers::rotateByMatrix(Coord<float> pos, const Coord<float> &angles)
{
    float xrot = angles[0];
    float yrot = angles[1];
    float zrot = angles[2];
    // z axis rotation
    pos[0] = (cosf(zrot) * pos[0] - (sinf(zrot) * pos[1]));
    pos[1] = (sinf(zrot) * pos[0] + cosf(zrot) * pos[1]);
    return pos;
}
The image below shows the object I'm trying to rotate (looking down the Z axis) before the rotation is attempted, each small sphere indicates one of the coordinates I'm trying to rotate
alt text http://www.cs.nott.ac.uk/~jqs/notsquashed.png
The rotation is performed for the object by the following code:
//loop over each coordinate in the object
for (int k=start; k<finish; ++k)
{
Coord<float> pos = mp[k-start];
//move object away from origin to test rotation around origin
pos += Coord<float>(5.0,5.0,5.0);
pos = rotateByMatrix(pos, rots);
//wrap particle position
//these bits of code just wrap the coordinates around if the are
//outside of the volume, and write the results to the positions
//array and so shouldn't affect the rotation.
for (int l=0; l<3; ++l)
{
//wrap to ensure torroidal space
if (pos[l] < origin[l]) pos[l] += dims[l];
if (pos[l] >= (origin[l] + dims[l])) pos[l] -= dims[l];
parts->m_hPos[k * 4 + l] = pos[l];
}
}
The problem is that when I perform the rotation in this way, with the angles parameter set to (0.0,0.0,1.0) it works (sort of), but the object gets deformed, like so:
alt text http://www.cs.nott.ac.uk/~jqs/squashed.png
which is not what I want. Can anyone tell me what I'm doing wrong and how I can rotate the entire object around the axis without deforming it?
Thanks
nodlams
Where you do your rotation in rotateByMatrix, you compute the new pos[0], but then feed that into the next line for computing the new pos[1]. So the pos[0] you're using to compute the new pos[1] is not the input, but the output. Store the result in a temp var and return that.
Coord<float> tmp;
tmp[0] = (cosf(zrot) * pos[0] - (sinf(zrot) * pos[1]));
tmp[1] = (sinf(zrot) * pos[0] + cosf(zrot) * pos[1]);
return tmp;
Also, pass the pos into the function as a const reference.
const Coord<float> &pos
Plus you should compute the sin and cos values once, store them in temporaries and reuse them.
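Putting those suggestions together (temporary output, const-reference input, cached sin/cos), the Z-rotation part of rotateByMatrix would look roughly like this sketch:

Coord<float> Polymers::rotateByMatrix(const Coord<float> &pos, const Coord<float> &angles)
{
    float zrot = angles[2];
    float cz = cosf(zrot); // compute each trig value once and reuse it
    float sz = sinf(zrot);
    Coord<float> tmp = pos; // keep the input x/y intact while writing the outputs
    tmp[0] = cz * pos[0] - sz * pos[1];
    tmp[1] = sz * pos[0] + cz * pos[1];
    return tmp;
}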