Ray vs ellipsoid intersection - C++

I am trying to implement ray vs ellipsoid intersection by "squishing" space and doing ray vs sphere:
create mat3 S with ellipsoid radius at diagonal
squish ray by multiplying start and direction by an inverse of S
intersect ray with sphere of radius 1.0 in local space
multiply hitPoint by S to unsquish it.
Here is ray vs sphere:
float P = glm::dot(dir, sphereCenter - start);  // projection of the center onto the ray
float L = glm::distance(start, sphereCenter);
float d = sqrt(L*L - P*P);                      // perpendicular distance from center to ray
if (d < radius) {
    float x0 = sqrt(radius*radius - d*d);       // was sqrt(1.f - d*d), which is only valid for radius 1.0
    hitPoint = start + dir*(P - x0);
    hitNormal = glm::normalize(hitPoint - sphereCenter);
}
else if (d == radius) {                         // tangent case; exact float equality rarely triggers
    hitPoint = start + dir*P;
    hitNormal = glm::normalize(hitPoint - sphereCenter);
}
else {
    return false;
}
if (glm::distance(start, hitPoint) > dist) return false;
return true;
Here is the squishing part:
glm::vec3 S = start;
glm::vec3 Dir = dir;
auto sphereCenter = thisEntity()->transform()->getPosition();
auto scale = thisEntity()->transform()->getScale();
glm::mat3 q = glm::mat3(0);
float x = _radius.x * scale.x;
float y = _radius.y * scale.y;
float z = _radius.z * scale.z;
q[0][0] = x;
q[1][1] = y;
q[2][2] = z;
glm::mat3 qI = glm::inverse(q);
S = qI * S;
Dir = qI * Dir;
//calculate hit point in world space squished
glm::vec3 hitPoint, hitNormal;
if (!IntersectionsMath::instance()->segmentVsSphere(sphereCenter, S, Dir, dist, 1.f, hitPoint, hitNormal)) return;
hitPoint = q * hitPoint;
hit.pushHit(hitPoint, hitNormal, this);
The current ray vs. sphere code works with world positions; I'm trying to make it work at the origin, so that shouldn't matter. Ray vs. a regular sphere works fine; the ellipsoid is the problem.
I have spent a lot of time on this and something somewhere is wrong.

Problem:
The center of scaling matters.
Solution:
Perform the scaling about the center of the ellipsoid.
... and not the origin as you are doing right now. This is because, although the direction of the ray will be the same (it is just a directional vector), the relative displacement between the scaled source and center of the sphere will be different:
Scaling about origin (current code):
Source S' = qI * S, center C' = C (passed to the intersection test unscaled) --- S' - C' = qI * S - C
Scaling about ellipsoid center (correct procedure):
Source S" = qI * (S - C), center C" = 0 --- S" - C" = qI * (S - C)
The two displacements differ by terms involving the position of the original ellipsoid; thus your current ray will likely miss or give false positives.
Corrected code:
// scale about the ellipsoid's position by subtracting before multiplying
// (a more appropriate name for "sphereCenter" would be "ellipsoidCenter", to avoid confusion)
glm::vec3 S_ = qI * (S - sphereCenter);
// this ::normalize should really be in the intersection function
glm::vec3 Dir_ = glm::normalize(qI * Dir);
// calculate the hit point in the squished space,
// where the sphere sits at the origin
glm::vec3 hitPoint, hitNormal;
if (!IntersectionsMath::instance()->segmentVsSphere(
        glm::vec3(0.f), S_, Dir_,   // GLM has no vec3::ZERO constant
        dist, 1.f,
        hitPoint, hitNormal)) return;
// re-apply the offset
hitPoint = q * hitPoint + sphereCenter;
// problem: the sphere-space normal is not correct for the scaled ellipsoid
// solution: take the world-space offset from the center and divide each
// component by the square of the respective semi-axis (this is the gradient
// of the ellipsoid's implicit equation), then renormalize
hitNormal = hitPoint - sphereCenter;
hitNormal.x /= (x * x); hitNormal.y /= (y * y); hitNormal.z /= (z * z);
hitNormal = glm::normalize(hitNormal);
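For reference, the whole procedure can be collected into a single self-contained function. This is a minimal sketch under stated assumptions, not the engine's actual API: the name rayVsEllipsoid is made up, GLM's component-wise vector division stands in for the explicit qI matrix, and the parameter t lives in the squished space (the squish distorts distances, so any distance cutoff should be compared in world space).

#include <cmath>
#include <glm/glm.hpp>

// Hedged sketch: ray vs ellipsoid by squishing space to a unit sphere.
bool rayVsEllipsoid(const glm::vec3& start, const glm::vec3& dir,
                    const glm::vec3& center, const glm::vec3& semiAxes,
                    glm::vec3& hitPoint, glm::vec3& hitNormal)
{
    // squish space about the ellipsoid's center so it becomes a unit sphere at the origin
    glm::vec3 localStart = (start - center) / semiAxes;   // component-wise divide = qI * (S - C)
    glm::vec3 localDir   = glm::normalize(dir / semiAxes);

    // ray vs unit sphere at the origin: |localStart + t*localDir|^2 = 1
    float b    = glm::dot(localDir, -localStart);
    float c    = glm::dot(localStart, localStart) - 1.0f;
    float disc = b * b - c;
    if (disc < 0.0f) return false;            // ray misses the sphere
    float t = b - std::sqrt(disc);            // nearer of the two roots
    if (t < 0.0f) return false;               // sphere is behind the ray

    glm::vec3 localHit = localStart + localDir * t;

    // unsquish the hit point, then build the normal from the gradient of
    // the ellipsoid's implicit equation (divide by the squared semi-axes)
    hitPoint  = localHit * semiAxes + center;
    hitNormal = glm::normalize((hitPoint - center) / (semiAxes * semiAxes));
    return true;
}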

Related

Ray transformation in a Ray - OBB intersection test

I've implemented an algorithm that tests for a Ray - AABB intersection and it works fine. But when I try to transform the Ray into the AABB's local space (making this a Ray - OBB test), I can't get correct results. I've studied several forums and other resources, but I'm still missing something. (Some sources suggest applying the inverted transformation to the ray's origin and end, and only then calculating the direction; others suggest applying it to the origin and direction.) Can someone point me in the right direction (no pun intended)?
Here goes two functions responsible for the math:
1) Calculating inverses and other things to perform tests
bool Ray::intersectsMesh(const Mesh& mesh, const Transformation& transform) {
    float largestNearIntersection = std::numeric_limits<float>::min();
    float smallestFarIntersection = std::numeric_limits<float>::max();

    glm::mat4 modelTransformMatrix = transform.modelMatrix();
    Box boundingBox = mesh.boundingBox();

    glm::mat4 inverse = glm::inverse(transform.modelMatrix());
    glm::vec4 newOrigin = inverse * glm::vec4(mOrigin, 1.0);
    newOrigin /= newOrigin.w;
    mOrigin = newOrigin;
    mDirection = glm::normalize(inverse * glm::vec4(mDirection, 0.0));

    glm::vec3 xAxis = glm::vec3(glm::column(modelTransformMatrix, 0));
    glm::vec3 yAxis = glm::vec3(glm::column(modelTransformMatrix, 1));
    glm::vec3 zAxis = glm::vec3(glm::column(modelTransformMatrix, 2));
    glm::vec3 OBBTranslation = glm::vec3(glm::column(modelTransformMatrix, 3));

    printf("trans x %f y %f z %f\n", OBBTranslation.x, OBBTranslation.y, OBBTranslation.z);

    glm::vec3 delta = OBBTranslation - mOrigin;
    bool earlyFalseReturn = false;

    calculateIntersectionDistances(xAxis, delta, boundingBox.min.x, boundingBox.max.x,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(yAxis, delta, boundingBox.min.y, boundingBox.max.y,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    calculateIntersectionDistances(zAxis, delta, boundingBox.min.z, boundingBox.max.z,
                                   &largestNearIntersection, &smallestFarIntersection, &earlyFalseReturn);
    if (smallestFarIntersection < largestNearIntersection || earlyFalseReturn) { return false; }

    return true;
}
2) Helper function (probably not needed here, as it relates only to the AABB test and works fine)
void Ray::calculateIntersectionDistances(const glm::vec3& axis,
                                         const glm::vec3& delta,
                                         float minPointOnAxis,
                                         float maxPointOnAxis,
                                         float *largestNearIntersection,
                                         float *smallestFarIntersection,
                                         bool *earlyFalseReturn)
{
    float dividend = glm::dot(axis, delta);
    float denominator = glm::dot(mDirection, axis);

    if (fabs(denominator) > 0.001f) {
        float t1 = (dividend + minPointOnAxis) / denominator;
        float t2 = (dividend + maxPointOnAxis) / denominator;
        if (t1 > t2) { std::swap(t1, t2); }
        *smallestFarIntersection = std::min(t2, *smallestFarIntersection);
        *largestNearIntersection = std::max(t1, *largestNearIntersection);
    } else if (-dividend + minPointOnAxis > 0.0 || -dividend + maxPointOnAxis < 0.0) {
        *earlyFalseReturn = true;
    }
}
As it turned out, the ray's world -> model transformation was correct. The bug was in the intersection test. I had to completely replace the intersection code, because I wasn't able to identify the bug in the old code, unfortunately.
Ray transformation code:
glm::mat4 inverse = glm::inverse(transform.modelMatrix());
glm::vec4 start = inverse * glm::vec4(mOrigin, 1.0);
glm::vec4 direction = inverse * glm::vec4(mDirection, 0.0);
direction = glm::normalize(direction);
And the Ray - AABB test was stolen from here
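For completeness, the replacement test in the ray's local space can be as simple as a standard slab test. The following is a minimal sketch, not the code from the linked source; rayVsAabb and its parameters are illustrative names, and it relies on IEEE infinities when a direction component is zero.

#include <algorithm>
#include <limits>
#include <glm/glm.hpp>

// Hedged sketch: slab test run after transforming the ray into the box's
// local space as shown above.
bool rayVsAabb(const glm::vec3& origin, const glm::vec3& direction,
               const glm::vec3& aabbMin, const glm::vec3& aabbMax)
{
    float tNear = -std::numeric_limits<float>::max();
    float tFar  =  std::numeric_limits<float>::max();
    for (int i = 0; i < 3; ++i) {
        float t1 = (aabbMin[i] - origin[i]) / direction[i];
        float t2 = (aabbMax[i] - origin[i]) / direction[i];
        if (t1 > t2) std::swap(t1, t2);
        tNear = std::max(tNear, t1);          // latest entry across all slabs
        tFar  = std::min(tFar, t2);           // earliest exit across all slabs
        if (tNear > tFar) return false;       // slab intervals don't overlap: miss
    }
    return tFar >= 0.0f;                      // reject boxes entirely behind the ray
}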

Ray casting in rotating fan configuration produces point cloud with curvature, how to eliminate curvature?

I'm attempting to perform an intersection test using ray casting (not sure if that's the correct term, so please forgive me if not) and am outputting the intersections as a point cloud. The point cloud shows curvature on the Z axis only; it is completely flat on the Y axis, and the horizontal axis in the image is the X axis.
I borrowed concepts from the Scratchapixel site, specifically http://scratchapixel.com/lessons/3d-basic-rendering/minimal-ray-tracer-rendering-simple-shapes/ray-box-intersection.
Essentially, I am generating 16 rays, all with the same origin vector. The direction vectors start at +15 degrees on the YZ plane and continue in increments of -2 degrees down to -15. I have an axis-aligned bounding box that I am testing intersection with. I use a rotation transform to rotate the 16 rays CCW around the Z axis, performing the intersection test for all 16 rays every 0.1 degrees; if it returns true, I add the point to the point cloud.
Here's my intersection code:
bool test_intersect(Box b, Ray r, Vec3f& intersect_point)
{
    float txmin = 0.0f, txmax = 0.0f, tymin = 0.0f, tymax = 0.0f, tzmin = 0.0f, tzmax = 0.0f;
    float t_min = 0.0f, t_max = 0.0f, t = 0.0f;

    // Determine inverse direction of ray to alleviate 0 = -0 issues
    Vec3f inverse_direction(1 / r.direction.x, 1 / r.direction.y, 1 / r.direction.z);

    // Solving box_min/box_max = O + D*t
    txmin = (b.box_min.x - r.origin.x) * inverse_direction.x;
    txmax = (b.box_max.x - r.origin.x) * inverse_direction.x;
    tymin = (b.box_min.y - r.origin.y) * inverse_direction.y;
    tymax = (b.box_max.y - r.origin.y) * inverse_direction.y;
    tzmin = (b.box_min.z - r.origin.z) * inverse_direction.z;
    tzmax = (b.box_max.z - r.origin.z) * inverse_direction.z;

    // Depending on direction of ray tmin may > tmax, so we may need to swap
    if (txmin > txmax) std::swap(txmin, txmax);
    if (tymin > tymax) std::swap(tymin, tymax);
    if (tzmin > tzmax) std::swap(tzmin, tzmax);

    t_min = txmin;
    t_max = txmax;

    // If the t-value of a min is greater than the t-value of a max,
    // we missed the object in that plane.
    if ((t_min > tymax) || (tymin > t_max))
        return false;
    if (tymin > t_min)
        t_min = tymin;
    if (tymax < t_max)
        t_max = tymax;

    if ((t_min > tzmax) || (tzmin > t_max))
        return false;
    if (tzmin > t_min)
        t_min = tzmin;
    if (tzmax < t_max)
        t_max = tzmax;

    if (t_min > 0)
        t = t_min;
    else if (t_max > 0)
        t = t_max;
    else
        return false;

    intersect_point.x = r.origin.x + r.direction.x * t;
    intersect_point.y = r.origin.y + r.direction.y * t;
    intersect_point.z = r.origin.z + r.direction.z * t;

    return true;
}
And my rotation:
// Rotation around z axis, for rotating array and checking beam intersections
void transform_rotate_z(Vec3f& in_vector, float angle)
{
    float radians = angle * (M_PI / 180);
    float result_x = cos(radians) * in_vector.x + -sin(radians) * in_vector.y;
    float result_y = sin(radians) * in_vector.x + cos(radians) * in_vector.y;
    in_vector.x = result_x;
    in_vector.y = result_y;
}
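For reference, the setup described in the question amounts to roughly the following loop. This is a hedged sketch that reuses the question's Vec3f/Ray/Box types and the two functions above; origin, box, and the container are assumed to exist, and all names are illustrative.

// 16 rays fanned from +15 down to -15 degrees on the YZ plane,
// swept CCW around the Z axis in 0.1-degree steps
std::vector<Vec3f> point_cloud;
for (float sweep = 0.0f; sweep < 360.0f; sweep += 0.1f)
{
    for (int i = 0; i < 16; ++i)
    {
        float elev_rad = (15.0f - 2.0f * i) * (float)(M_PI / 180.0);
        Ray r;
        r.origin = origin;
        r.direction = Vec3f(0.0f, std::cos(elev_rad), std::sin(elev_rad));
        transform_rotate_z(r.direction, sweep); // rotate the whole fan around Z
        Vec3f hit;
        if (test_intersect(box, r, hit))
            point_cloud.push_back(hit);
    }
}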
I have racked my brain for quite a while, but I can't seem to determine how to prevent this curvature; I'm sure I'm overlooking something simple. I'd be grateful for any help you can provide.

Refraction in Raytracing?

I've been working on my raytracer again. I added reflection and multithreading support. Currently I am working on adding refractions, but it's only half working.
As you can see, there is a center sphere (without specular highlight), a reflecting sphere (to the right) and a refracting sphere (left). I'm pretty happy with the reflections; they look very good. The refractions are only partly working: the light is refracted and all shadows of the spheres are visible in the sphere (refraction index 1.4), but there is an outer black ring.
EDIT: Apparently the black ring gets bigger, and therefore the sphere smaller, when I increase the refraction index of the sphere. On the contrary, when decreasing the index of refraction, the Sphere gets larger and the black ring smaller...until, with index of refraction set to one, the ring totally disappears.
IOR = 1.9
IOR = 1.1
IOR = 1.00001
And interestingly enough at IOR = 1 the sphere loses its transparency and becomes white.
I think I covered total internal reflection and it is not the issue here.
Now the code:
I'm using the operator | for the dot product, so (vec|vec) is a dot product, and the operator ~ to invert vectors. The objects, both lights and spheres, are stored in Object **objects;.
Raytrace function
Colour raytrace(const Ray &r, const int &depth)
{
    // first find the nearest intersection of the ray with an object
    Colour finalColour = skyBlue * (r.getDirection()|Vector(0,0,-1)) * SKY_FACTOR;
    double t, t_min = INFINITY;
    int index_nearObj = -1;

    for(int i = 0; i < objSize; i++)
    {
        if(!dynamic_cast<Light *>(objects[i])) // skip light src
        {
            t = objects[i]->findParam(r);
            if(t > 0 && t < t_min)
            {
                t_min = t;
                index_nearObj = i;
            }
        }
    }

    // no intersection
    if(index_nearObj < 0)
        return finalColour;

    Vector intersect = r.getOrigin() + r.getDirection()*t_min;
    Vector normal = objects[index_nearObj]->NormalAtIntersect(intersect);
    Colour objectColor = objects[index_nearObj]->getColor();

    Ray rRefl, rRefr; // reflected and refracted Ray
    Colour refl = finalColour, refr = finalColour; // reflected and refracted colours
    double reflectance = 0, transmittance = 0;

    if(objects[index_nearObj]->isReflective() && depth < MAX_TRACE_DEPTH)
    {
        // handle reflection
        rRefl = objects[index_nearObj]->calcReflectingRay(r, intersect, normal);
        refl = raytrace(rRefl, depth + 1);
        reflectance = 1;
    }

    if(objects[index_nearObj]->isRefractive() && depth < MAX_TRACE_DEPTH)
    {
        // handle transmission
        rRefr = objects[index_nearObj]->calcRefractingRay(r, intersect, normal, reflectance, transmittance);
        refr = raytrace(rRefr, depth + 1);
    }

    Ray rShadow; // shadow ray
    bool shadowed;
    double t_light = -1;
    Colour localColour;
    Vector tmpv;

    // get material properties
    double ka = 0.2; // ambient coefficient
    double kd;       // diffuse coefficient
    double ks;       // specular coefficient
    Colour ambient = ka * objectColor; // ambient component
    Colour diffuse, specular;
    double brightness;

    localColour = ambient;

    // check whether the object is in shadow or light:
    // cast a ray from the object and
    // check if there is an intersection with another object
    for(int i = 0; i < objSize; i++)
    {
        if(dynamic_cast<Light *>(objects[i])) // if object is a light
        {
            // for each light
            shadowed = false;
            // create Ray to light
            tmpv = objects[i]->getPosition() - intersect;
            rShadow = Ray(intersect + (!tmpv) * BIAS, tmpv);
            t_light = objects[i]->findParam(rShadow);

            if(t_light < 0) // no intersection, which is quite impossible
                continue;

            // then check if that Ray intersects an object that is not a light
            for(int j = 0; j < objSize; j++)
            {
                if(!dynamic_cast<Light *>(objects[j]) && j != index_nearObj) // if obj is not a light
                {
                    t = objects[j]->findParam(rShadow);
                    // if it is smaller we know the light is behind the object
                    // --> shadowed by this light
                    if (t >= 0 && t < t_light)
                    {
                        // set the flag and stop the cycle
                        shadowed = true;
                        break;
                    }
                }
            }

            if(!shadowed)
            {
                rRefl = objects[index_nearObj]->calcReflectingRay(rShadow, intersect, normal);
                // reflected ray from light src, for ks
                kd = maximum(0.0, (normal|rShadow.getDirection()));
                if(objects[index_nearObj]->getShiny() <= 0)
                    ks = 0;
                else
                    ks = pow(maximum(0.0, (r.getDirection()|rRefl.getDirection())), objects[index_nearObj]->getShiny());

                diffuse = kd * objectColor; // * objects[i]->getColour();
                specular = ks * objects[i]->getColor();
                brightness = 1 /(1 + t_light * DISTANCE_DEPENDENCY_LIGHT);
                localColour += brightness * (diffuse + specular);
            }
        }
    }

    finalColour = localColour + (transmittance * refr + reflectance * refl);
    return finalColour;
}
Now, the function that calculates the refracted Ray. I used several different sites as resources, and each had similar algorithms. This is the best I could do so far. It may just be a tiny detail I'm not seeing...
Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection, Vector &normal, double &refl, double &trans) const
{
    double n1, n2, n;
    double cosI = (r.getDirection()|normal);

    if(cosI > 0.0)
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal; // invert
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }

    n = n1/n2;
    double sinT2 = n*n * (1.0 - cosI * cosI);
    double cosT = sqrt(1.0 - sinT2);

    // fresnel equations
    double rn = (n1 * cosI - n2 * cosT)/(n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT)/(n2 * cosI + n1 * cosT); // was n2 * cosT in the denominator: a typo
    rn *= rn;
    rt *= rt;
    refl = (rn + rt)*0.5;
    trans = 1.0 - refl;

    if(n == 1.0)
        return r;

    if(cosT*cosT < 0.0) // tot inner refl
    {
        refl = 1;
        trans = 0;
        return calcReflectingRay(r, intersection, normal);
    }

    Vector dir = n * r.getDirection() + (n * cosI - cosT)*normal;
    return Ray(intersection + dir * BIAS, dir);
}
EDIT: I also changed the refraction indices around. From
if(cosI > 0.0)
{
    n1 = 1.0;
    n2 = getRefrIndex();
    normal = ~normal;
}
else
{
    n1 = getRefrIndex();
    n2 = 1.0;
    cosI = -cosI;
}
to
if(cosI > 0.0)
{
    n1 = getRefrIndex();
    n2 = 1.0;
    normal = ~normal;
}
else
{
    n1 = 1.0;
    n2 = getRefrIndex();
    cosI = -cosI;
}
Then I get this, and almost the same (still upside down) with an index of refraction of 1!
And the reflection calculation:
Ray Sphere::calcReflectingRay(const Ray &r, const Vector &intersection, const Vector &normal) const
{
    Vector rdir = r.getDirection();
    Vector dir = rdir - 2 * (rdir|normal) * normal;
    return Ray(intersection + dir*BIAS, dir);
    // the Ray constructor automatically normalizes directions
}
So my question is: How do I fix the outer black circle? Which version is correct?
Help is greatly appreciated :)
This is compiled on Linux using g++ 4.8.2.
Warning: the following is a guess, not a certainty. I'd have to look at the code in more detail to be sure what's happening and why.
That said, it looks to me like your original code is basically simulating a concave lens instead of convex.
A convex lens is basically a magnifying lens, bringing light rays from a relatively small area into focus on a plane.
This also shows why the corrected code shows an upside-down image. The rays of light coming from the top on one side get projected to the bottom on the other (and vice versa).
Getting back to the concave lens though: a concave lens is a reducing lens that shows a wide angle of picture from in front of the lens.
If you look at the bottom right corner here, it shows what I suspect is the problem: especially with a high index of refraction, the rays of light trying to come into the lens intersect the edge of the lens itself. For all the angles wider than that, you're typically going to see a black ring, because the front edge of the lens is acting as a shade to prevent light from entering.
Increasing the index of refraction increases the width of that black ring, because the light is bent more, so a larger portion at the edges is intersecting the outer edge of the lens.
In case you care about how they avoid this with things like wide-angle camera lenses, the usual route is to use a meniscus lens, at least for the front element.
This isn't a panacea, but does at least prevent incoming light rays from intersecting the outer edge of the front lens element. Depending on exactly how wide an angle the lens needs to cover, it'll often be quite a bit less radical of a meniscus than this (and in some cases it'll be a plano-concave) but you get the general idea.
Final warning: of course, all of these are hand-drawn, and intended only to give the general idea, not (for example) to reflect the design of any particular lens, an element with any particular index of refraction, etc.
I stumbled across this exact issue as well when working on a ray tracer. #lightxbulb's comment about normalizing the ray direction vector fixed this problem for me.
Firstly, keep your code that computes the refraction indices prior to your edit. In other words, you should be seeing those black rings in your renderings.
Then, in your calcRefractingRay function where you compute cosI, use the dot product of normalize(r.getDirection()) and normal. Currently you're taking the dot product of r.getDirection() and normal.
Secondly, when you compute the refracted ray direction dir, use normalize(r.getDirection()) instead of r.getDirection(). Again, you're currently using r.getDirection() in your calculation.
Also, there is an issue with the way you're checking for total internal reflection. You should check that the term you're taking the square root of (1.0 - sinT2) is non-negative before actually computing the square root.
Hope that helps!
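Putting those suggestions together, a hedged sketch of the corrected function might look like the following. It keeps the pre-edit index assignment as advised above, assumes the question's ! operator normalizes a vector (as the shadow-ray code suggests), and moves the total-internal-reflection test before the square root:

Ray Sphere::calcRefractingRay(const Ray &r, const Vector &intersection,
                              Vector &normal, double &refl, double &trans) const
{
    Vector d = !r.getDirection();       // normalized incident direction
    double n1, n2;
    double cosI = (d | normal);

    if (cosI > 0.0)                     // index assignment as in the pre-edit code
    {
        n1 = 1.0;
        n2 = getRefrIndex();
        normal = ~normal;
    }
    else
    {
        n1 = getRefrIndex();
        n2 = 1.0;
        cosI = -cosI;
    }

    double n = n1 / n2;
    double sinT2 = n * n * (1.0 - cosI * cosI);

    if (sinT2 > 1.0)                    // total internal reflection: test this
    {                                   // BEFORE taking the square root
        refl = 1.0;
        trans = 0.0;
        return calcReflectingRay(r, intersection, normal);
    }
    double cosT = sqrt(1.0 - sinT2);

    // Fresnel equations, s- and p-polarized reflectances averaged
    double rn = (n1 * cosI - n2 * cosT) / (n1 * cosI + n2 * cosT);
    double rt = (n2 * cosI - n1 * cosT) / (n2 * cosI + n1 * cosT);
    refl  = (rn * rn + rt * rt) * 0.5;
    trans = 1.0 - refl;

    Vector dir = n * d + (n * cosI - cosT) * normal;  // refracted direction
    return Ray(intersection + dir * BIAS, dir);
}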

opengl trackball

I am trying to rotate an OpenGL scene using a trackball. The problem I am having is that I get rotations opposite to the direction of my swipe on the screen. Here is the snippet of code.
prevPoint.y = viewPortHeight - prevPoint.y;
currentPoint.y = viewPortHeight - currentPoint.y;

prevPoint.x = prevPoint.x - centerx;
prevPoint.y = prevPoint.y - centery;
currentPoint.x = currentPoint.x - centerx;
currentPoint.y = currentPoint.y - centery;

double angle = 0;

if (prevPoint.x == currentPoint.x && prevPoint.y == currentPoint.y) {
    return;
}

double d, z, radius = viewPortHeight * 0.5;
if (viewPortWidth > viewPortHeight) {
    radius = viewPortHeight * 0.5f;
} else {
    radius = viewPortWidth * 0.5f;
}

d = (prevPoint.x * prevPoint.x + prevPoint.y * prevPoint.y);
if (d <= radius * radius * 0.5) { /* Inside sphere */
    z = sqrt(radius*radius - d);
} else {                          /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}

Vector refVector1(prevPoint.x, prevPoint.y, z);
refVector1.normalize();

d = (currentPoint.x * currentPoint.x + currentPoint.y * currentPoint.y);
if (d <= radius * radius * 0.5) { /* Inside sphere */
    z = sqrt(radius*radius - d);
} else {                          /* On hyperbola */
    z = (radius * radius * 0.5) / sqrt(d);
}

Vector refVector2(currentPoint.x, currentPoint.y, z);
refVector2.normalize();

Vector axisOfRotation = refVector1.cross(refVector2);
axisOfRotation.normalize();
angle = acos(refVector1*refVector2);
I recommend artificially setting prevPoint and currentPoint to (0,0) (0,1) and then stepping through the code (with a debugger or with your eyes) to see if each part makes sense to you, and the angle of rotation and axis at the end of the block are what you expect.
If they are what you expect, then I'm guessing the error is in the logic that occurs after that. i.e. you then take the angle and axis and convert them to a matrix which gets multiplied to move the model. A number of convention choices happen in this pipeline --which if swapped can lead to the type of bug you're having:
Whether the formula assumes the angle is winding left or right handedly around the axis.
Whether the transformation is meant to rotate an object in the world or meant to rotate the camera.
Whether the matrix is meant to operate by multiplication on the left or right.
Whether rows or columns of matrices are contiguous in memory.
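As a hedged illustration of the first convention (not a guaranteed fix), either of the following one-line changes to the snippet above reverses the sense of the rotation; trying one quickly tells you whether a winding convention is the culprit:

// swap the cross-product operands ...
Vector axisOfRotation = refVector2.cross(refVector1);
// ... or, equivalently, keep the axis and negate the angle
angle = -acos(refVector1 * refVector2);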

Getting a Virtual Trackball to work from any viewing angle

I am currently trying to work on getting my virtual trackball to work from any angle. When I am looking at it from the z axis, it seems to work fine. I hold my mouse down, and move the mouse up... the rotation will move accordingly.
Now, if I change my viewing angle / position of my camera and try to move my mouse, the rotation will occur as if I were looking from the z axis. I cannot come up with a good way to get this to work.
Here is the code:
void Renderer::mouseMoveEvent(QMouseEvent *e)
{
    // Get coordinates
    int x = e->x();
    int y = e->y();

    if (isLeftButtonPressed)
    {
        // project current screen coordinates onto hemisphere
        Point sphere = projScreenCoord(x, y);

        // find axis by taking cross product of current and previous hemi points
        axis = Point::cross(previousPoint, sphere);

        // angle can be found from magnitude of cross product
        double length = sqrt( axis.x * axis.x + axis.y * axis.y + axis.z * axis.z );

        // Normalize
        axis = axis / length;

        double lengthPrev = sqrt( previousPoint.x * previousPoint.x + previousPoint.y * previousPoint.y + previousPoint.z * previousPoint.z );
        double lengthCur = sqrt( sphere.x * sphere.x + sphere.y * sphere.y + sphere.z * sphere.z );
        angle = asin(length / (lengthPrev * lengthCur));

        // Convert into Degrees
        angle = angle * 180 / M_PI;

        // 'add' this rotation matrix to our 'total' rotation matrix
        glPushMatrix();  // save the old matrix so we don't mess anything up
        glLoadIdentity();
        glRotatef(angle, axis[0], axis[1], axis[2]);  // our newly calculated rotation
        glMultMatrixf(rotmatrix);  // our previous rotation matrix
        glGetFloatv(GL_MODELVIEW_MATRIX, (GLfloat*) rotmatrix);  // let OpenGL do the matrix mult, then store the result
        glPopMatrix();  // return modelview to its old value
    }
}  // note: this closing brace was missing from the original snippet
// Project screen coordinates onto a unit hemisphere
Point Renderer::projScreenCoord(int x, int y)
{
    // find projected x & y coordinates
    double xSphere = ((double)x/width)*2.0 - 1.0;
    double ySphere = (1 - ((double)y/height)) * 2.0 - 1.0;
    double temp = 1.0 - xSphere*xSphere - ySphere*ySphere;

    // check so you don't take the sqrt of a negative number
    double zSphere;
    if (temp < 0) { zSphere = 0.0; }
    else          { zSphere = sqrt(temp); }

    Point sphere(xSphere, ySphere, zSphere);
    // return the point on the sphere
    return sphere;
}
I am still fairly new at this. Sorry for the trouble and thanks for all the help =)
The usual way involves quaternions. E.g., in sample code originally from SGI.
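As a minimal sketch of the quaternion route (assuming GLM; the viewRotation parameter is an assumption standing in for the camera's current orientation), the key ideas are to accumulate each drag increment as a quaternion and to bring the drag axis from camera space into world space, which is what makes the trackball behave consistently from any viewing angle:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat accumulated(1.0f, 0.0f, 0.0f, 0.0f);   // identity rotation

void applyDrag(const glm::vec3& axisInCameraSpace, float angleRadians,
               const glm::quat& viewRotation)
{
    // rotate the drag axis from camera space into world/model space
    glm::vec3 worldAxis = glm::inverse(viewRotation) * axisInCameraSpace;
    // compose this increment with the accumulated rotation
    glm::quat increment = glm::angleAxis(angleRadians, glm::normalize(worldAxis));
    accumulated = increment * accumulated;
}

// when rendering: glm::mat4 model = glm::mat4_cast(accumulated);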