Why is my Ray Tracer getting so much edge distortion? - c++

I am writing a ray tracer from scratch. The example renders two spheres using ray-sphere intersection detection. When the spheres are close to the center of the screen, they look fine. However, when I move the camera, or adjust the spheres' positions so they are closer to the edge, they become distorted.
This is the ray casting code:
void Renderer::RenderThread(int start, int span)
{
    // pCamera holds the position, rotation, and fov of the camera
    // pRenderTarget is the screen to render to

    // calculate the camera space to world space matrix
    Mat4 camSpaceMatrix = Mat4::Get3DTranslation(pCamera->position.x, pCamera->position.y, pCamera->position.z) *
                          Mat4::GetRotation(pCamera->rotation.x, pCamera->rotation.y, pCamera->rotation.z);

    // use the cameras origin as the rays origin
    Vec3 origin(0, 0, 0);
    origin = (camSpaceMatrix * origin.Vec4()).Vec3();

    // this for loop loops over all the pixels on the screen
    for ( int p = start; p < start + span; ++p ) {
        // get the pixel coordinates on the screen
        int px = p % pRenderTarget->GetWidth();
        int py = p / pRenderTarget->GetWidth();

        // in ray tracing, ndc space is from [0, 1]
        Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());

        // in ray tracing, screen space is [-1, 1]
        Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);

        // scale x by aspect ratio
        screen.x *= (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();

        // scale screen by the field of view
        // fov is currently set to 90
        screen *= tan((pCamera->fov / 2) * (PI / 180));

        // screen point is the pixels point in camera space,
        // give a z value of -1
        Vec3 camSpace(screen.x, screen.y, -1);
        camSpace = (camSpaceMatrix * camSpace.Vec4()).Vec3();

        // the rays direction is its point on the cameras viewing plane
        // minus the cameras origin
        Vec3 dir = (camSpace - origin).Normalized();
        Ray ray = { origin, dir };

        // find where the ray intersects with the spheres
        // using ray-sphere intersection algorithm
        Vec4 color = TraceRay(ray);
        pRenderTarget->PutPixel(px, py, color);
    }
}
The FOV is set to 90. I have seen other people run into this problem, but only because they were using a very high FOV value; I don't think 90 should cause issues. The distortion persists even if the camera is not moved at all. Any object close to the edge of the screen appears distorted.

When in doubt, you can always check out what other renderers are doing. I always compare my results and settings to Blender. Blender 2.82, for example, has a default field of view of 39.6 degrees.
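To put numbers on that: with a planar image plane and a 90 degree horizontal FOV, the half-width of the view plane is tan(45°) = 1.0, whereas Blender's 39.6 degrees gives tan(19.8°) ≈ 0.36. A pinhole projection onto a flat plane always stretches spheres into ellipses away from the image center, and the effect grows quickly with that half-angle, so a wide FOV makes the edge distortion much more visible even though the math is "correct".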
I also feel inclined to point out that this is wrong:
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
If you want to get the center of the pixel, then it should be 0.5f:
Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());
Also, and this is really a nit-picky kind of thing, your intervals are open intervals and not closed ones (as you mention in the source comments). The image plane coordinates never reach 0 or 1, and your camera space coordinates are never exactly -1 or 1. Eventually, when the image plane coordinates are converted to pixel coordinates, the intervals are left-closed: [0, width) and [0, height).
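For reference, here is a minimal sketch of the per-pixel setup with that 0.5f pixel-center offset applied. It reuses pRenderTarget, pCamera, Vec2, and PI from your code, so treat it as an illustration rather than a drop-in replacement:
float aspect = (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();
float fovScale = tan((pCamera->fov / 2) * (PI / 180)); // half-angle, degrees to radians

// 0.5f samples the center of the pixel; ndc lands in [0, 1)
Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());
// map to (-1, 1), flipping y so +y points up
Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);
screen.x *= aspect;   // account for non-square images
screen *= fovScale;   // widen or narrow by the field of view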
Good luck on your ray tracer!

Related

X,Y position of semi cylinder - ray - triangle- intersection to space [-1,1] [-1,1]

I am rendering a tile map to an FBO, then moving the resulting buffer to a texture and rendering it on a full-screen quad (FSQ). From the mouse click events I get the screen coordinates and move them to clip space [-1,1]:
glm::vec2 posMouseClipSpace((2.0f * myCursorPos.x) / myDeviceWidth - 1.0f,
                            1.0f - (2.0f * myCursorPos.y) / myDeviceHeight);
My program has logic that, based on those coordinates, selects a specific tile on the texture.
Now, moving to 3D, I am texturing a semi cylinder with the FBO I used in the previous step:
In this case I am using a ray-triangle intersection point that hits the cylinder, which has radius r and height h. The idea is to map this intersection point to [-1,1] space so I can keep the tile-selection logic in my program.
I use the Möller–Trumbore algorithm to find the point on the cylinder hit by the ray. Let's say the intersected point is (x, y) (I'm not sure whether the point is in triangle, object, or world space; apparently it's world space).
I want to translate that point to space x:[-1,1], y[-1,1].
I know the height of my cylinder, which is a quarter of the cylinder's arc length:
cylinderHeight = myRadius * (PI/2);
so the point's Y coordinate can be mapped to [-1,1] space:
vec2.y = (2.f * (intersectedPoint.y - myCylinder->position().y)) /
         (myCylinder->height()) - 1.f
and that works perfectly.
However, how do I compute the horizontal axis, which depends on the two variables x and z?
Currently my cylinder's radius is 1, so by coincidence a semi-cylinder centered at the origin spans (-1, 1) on the X axis, which made me think it was [-1,1] space, but it turns out it is not.
My next approach was to use the arc length of a semicircle, s = r * PI, and plug that value into the equation:
vec2.x = (2.f * (intersectedPoint.x - myCylinder->position().x)) /
         (myCylinder->arcLength()) - 1.f
but clearly it is off by 1 unit in the negative direction.
I appreciate the help.
From your description, it seems that you want to convert the world space intersection coordinate to its corresponding normalized texture coordinate.
For this you need the Z coordinate as well, as there must be two "horizontal" coordinates. However you don't need the arc length.
Using the relative X and Z coordinates of intersectedPoint, calculate the polar angle using atan2, and divide by PI (the angular range of the semi-circle arc):
vec2.x = atan2(intersectedPoint.z - myCylinder->position().z,
myCylinder->position().x - intersectedPoint.x) / PI;
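Putting both axes together, a minimal sketch of the full mapping might look like this (it assumes the same myCylinder accessors, a world-space intersectedPoint, and glm, as in your code):
// map a world-space hit point on the semi-cylinder to [-1, 1] x [-1, 1]
glm::vec2 uv;
// vertical axis: linear along the cylinder height
uv.y = (2.f * (intersectedPoint.y - myCylinder->position().y)) /
       myCylinder->height() - 1.f;
// horizontal axis: polar angle around the cylinder axis, normalized by PI
uv.x = atan2(intersectedPoint.z - myCylinder->position().z,
             myCylinder->position().x - intersectedPoint.x) / PI;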

Transparent sphere is mostly black after implementing refraction in a ray tracer

NOTE: I've edited my code. See below the divider.
I'm implementing refraction in my (fairly basic) ray tracer, written in C++. I've been following (1) and (2).
I get the result below. Why is the center of the sphere black?
The center sphere has a transmission coefficient of 0.9 and a reflective coefficient of 0.1. Its index of refraction is 1.5 and it is placed 1.5 units away from the camera. The other two spheres just use diffuse lighting, with no reflective/refractive component. I placed these two differently coloured spheres behind and in front of the transparent sphere to ensure that I don't see a reflection instead of a transmission.
I've made the background colour (the colour achieved when a ray from the camera does not intersect with any object) a colour other than black, so the center of the sphere is not just the background colour.
I have not implemented the Fresnel effect yet.
My trace function looks like this (verbatim copy, with some parts omitted for brevity):
bool isInside(Vec3f rayDirection, Vec3f intersectionNormal) {
    return dot(rayDirection, intersectionNormal) > 0;
}

Vec3f trace(Vec3f origin, Vec3f ray, int depth) {
    // (1) Find object intersection
    std::shared_ptr<SceneObject> intersectionObject = ...;

    // (2) Compute diffuse and ambient color contribution
    Vec3f color = ...;

    bool isTotalInternalReflection = false;

    if (intersectionObject->mTransmission > 0 && depth < MAX_DEPTH) {
        Vec3f transmissionDirection = refractionDir(
            ray,
            normal,
            1.5f,
            isTotalInternalReflection
        );
        if (!isTotalInternalReflection) {
            float bias = 1e-4 * (isInside(ray, normal) ? -1 : 1);
            Vec3f transmissionColor = trace(
                add(intersection, multiply(normal, bias)),
                transmissionDirection,
                depth + 1
            );
            color = add(
                color,
                multiply(transmissionColor, intersectionObject->mTransmission)
            );
        }
    }

    if (intersectionObject->mSpecular > 0 && depth < MAX_DEPTH) {
        Vec3f reflectionDirection = computeReflectionDirection(ray, normal);
        Vec3f reflectionColor = trace(
            add(intersection, multiply(normal, 1e-5)),
            reflectionDirection,
            depth + 1
        );
        float intensity = intersectionObject->mSpecular;
        if (isTotalInternalReflection) {
            intensity += intersectionObject->mTransmission;
        }
        color = add(
            color,
            multiply(reflectionColor, intensity)
        );
    }

    return truncate(color, 1);
}
If the object is transparent then it computes the direction of the transmission ray and recursively traces it, unless the refraction causes total internal reflection. In that case, the transmission component is added to the reflection component and thus the color will be 100% of the traced reflection color.
I add a little bias to the intersection point in the direction of the normal (inverted if inside) when recursively tracing the transmission ray. If I don't do that, then I get this result:
The computation for the direction of the transmission ray is performed in refractionDir. This function assumes that we will not have a transparent object inside another, and that the outside material is air, with an index of refraction of 1.
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
    float relativeIndexOfRefraction = 1.0f / refractionIndex;
    float cosi = -dot(ray, normal);
    if (isInside(ray, normal)) {
        // We should be reflecting across a normal inside the object, so
        // re-orient the normal to be inside.
        normal = multiply(normal, -1);
        relativeIndexOfRefraction = refractionIndex;
        cosi *= -1;
    }
    assert(cosi > 0);
    float base = (
        1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) *
        (1 - cosi * cosi)
    );
    if (base < 0) {
        isTotalInternalReflection = true;
        return ray;
    }
    return add(
        multiply(ray, relativeIndexOfRefraction),
        multiply(normal, relativeIndexOfRefraction * cosi - sqrtf(base))
    );
}
Here's the result when the spheres are further away from the camera:
And closer to the camera:
Edit: I noticed a couple bugs in my code.
When I add bias to the intersection point, it should be in the same direction as the transmission. I was adding it in the wrong direction by using a negative bias when inside the sphere. That doesn't make sense, because when the ray is coming from inside the sphere it will transmit to the outside of the sphere (when TIR is avoided).
Old code:
add(intersection, multiply(normal, bias))
New code:
add(intersection, multiply(transmissionDirection, 1e-4))
Similarly, the normal that refractionDir receives is the surface normal pointing away from the center of the sphere. The normal I want to use when computing the transmission direction is one pointing outward if the transmission ray is going to leave the object, or inward if the transmission ray is going to enter the object. Thus, the surface normal pointing out of the sphere should be inverted if we're entering the sphere, i.e. if the ray starts outside.
New code:
Vec3f refractionDir(Vec3f ray, Vec3f normal, float refractionIndex, bool &isTotalInternalReflection) {
    float relativeIndexOfRefraction;
    float cosi = -dot(ray, normal);
    if (isInside(ray, normal)) {
        relativeIndexOfRefraction = refractionIndex;
        cosi *= -1;
    } else {
        relativeIndexOfRefraction = 1.0f / refractionIndex;
        normal = multiply(normal, -1);
    }
    assert(cosi > 0);
    float base = (
        1 - (relativeIndexOfRefraction * relativeIndexOfRefraction) * (1 - cosi * cosi)
    );
    if (base < 0) {
        isTotalInternalReflection = true;
        return ray;
    }
    return add(
        multiply(ray, relativeIndexOfRefraction),
        multiply(normal, sqrtf(base) - relativeIndexOfRefraction * cosi)
    );
}
However, this all still gives me an unexpected result:
I've also added some unit tests, which all pass. They check the following:
A ray entering the center of the sphere parallel with the normal will transmit through the sphere without being bent (this tests two refractionDir calls, one outside and one inside).
Refraction at 45 degrees from the normal through a glass slab will bend inside the slab by 15 degrees towards the normal, away from the original ray direction. Its direction when it exits the sphere will be the original ray direction.
Similar test at 75 degrees.
Ensuring that total internal reflection happens when a ray is coming from inside the object and is at 45 degrees or wider.
I'll include one of the unit tests here and you can find the rest at this gist.
TEST_CASE("Refraction at 75 degrees from normal through glass slab") {
Vec3f rayDirection = normalize(Vec3f({ 0, -sinf(5.0f * M_PI / 12.0f), -cosf(5.0f * M_PI / 12.0f) }));
Vec3f normal({ 0, 0, 1 });
bool isTotalInternalReflection;
Vec3f refraction = refractionDir(rayDirection, normal, 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == 0);
REQUIRE(refraction[1] == Approx(-sinf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(refraction[2] == Approx(-cosf(40.0f * M_PI / 180.0f)).margin(0.03f));
REQUIRE(!isTotalInternalReflection);
refraction = refractionDir(refraction, multiply(normal, -1), 1.5f, isTotalInternalReflection);
REQUIRE(refraction[0] == Approx(rayDirection[0]));
REQUIRE(refraction[1] == Approx(rayDirection[1]));
REQUIRE(refraction[2] == Approx(rayDirection[2]));
REQUIRE(!isTotalInternalReflection);
}

How to calculate 3D point (World Coordinate) from 2D mouse clicked screen coordinate point in Openscenegraph?

I am trying to place a sphere in 3D space based on a user-selected point in 2D screen space. For this I am trying to calculate the 3D point from the 2D point using the technique below, but it is not giving the correct result.
mousePosition.x = ((clickPos.clientX - window.left) / control.width) * 2 - 1;
mousePosition.y = -((clickPos.clientY - window.top) / control.height) * 2 + 1;
Then I multiply mousePosition by the inverse of the MVP matrix, but I get random numbers as a result.
For calculating the MVP matrix:
osg::Matrix mvp = _camera->getViewMatrix() * _camera->getProjectionMatrix();
How can I proceed? Thanks.
Under the assumption that the mouse position is normalized to the range [-1, 1] for x and y, the following code will give you 2 points in world coordinates projected from your mouse coords: nearPoint is the point in 3D lying on the camera frustum near plane, farPoint on the frustum far plane.
Then you can compute a line passing through these points and intersect it with your plane.
// compute the matrix to unproject the mouse coords (in homogeneous space)
osg::Matrix VP = _camera->getViewMatrix() * _camera->getProjectionMatrix();
osg::Matrix inverseVP;
inverseVP.invert(VP);

// compute world-space points on the near and far planes
osg::Vec3 nearPoint(mousePosition.x, mousePosition.y, -1.0f);
osg::Vec3 farPoint(mousePosition.x, mousePosition.y, 1.0f);
osg::Vec3 nearPointWorld = nearPoint * inverseVP;
osg::Vec3 farPointWorld = farPoint * inverseVP;
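To finish the job, a minimal sketch of intersecting that near/far line with a plane could look like the following; planePoint and planeNormal are hypothetical placeholders for whatever plane you want to hit and are not part of the original answer:
// intersect the line nearPointWorld -> farPointWorld with a plane
// planePoint / planeNormal are placeholders: supply your own plane here
osg::Vec3 dir = farPointWorld - nearPointWorld;
float denom = dir * planeNormal;              // osg::Vec3 operator* is the dot product
if (fabs(denom) > 1e-6f) {
    float t = ((planePoint - nearPointWorld) * planeNormal) / denom;
    osg::Vec3 hit = nearPointWorld + dir * t; // world-space point to place the sphere at
}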

Wrong aspect ratio calculations for camera (simple ray-caster)

I am working on a really simple ray tracer.
For now I am trying to make the perspective camera work properly.
I use the following loop to render the scene (with just two hard-coded spheres; I cast a ray for each pixel from its center, no AA applied):
Camera * camera = new PerspectiveCamera({ 0.0f, 0.0f, 0.0f }/*pos*/,
    { 0.0f, 0.0f, 1.0f }/*direction*/, { 0.0f, 1.0f, 0.0f }/*up*/,
    buffer->getSize() /*projectionPlaneSize*/);
Sphere * sphere1 = new Sphere({ 300.0f, 50.0f, 1000.0f }, 100.0f); // center, radius
Sphere * sphere2 = new Sphere({ 100.0f, 50.0f, 1000.0f }, 50.0f);

for(int i = 0; i < buffer->getSize().getX(); i++) {
    for(int j = 0; j < buffer->getSize().getY(); j++) {
        // for each pixel of buffer (image)
        double centerX = i + 0.5;
        double centerY = j + 0.5;
        Geometries::Ray ray = camera->generateRay(centerX, centerY);
        Collision * collision = ray.testCollision(sphere1, sphere2);
        if(collision){
            // output red
        }else{
            // output blue
        }
    }
}
The Camera::generateRay(float x, float y) is:
Geometries::Ray Camera::generateRay(float x, float y) {
    // position = camera position, direction = camera direction, etc.
    Point2D xy = fromImageToPlaneSpace({ x, y });
    Vector3D imagePoint = right * xy.getX() + up * xy.getY() + position + direction;
    Vector3D rayDirection = imagePoint - position;
    rayDirection.normalizeIt();
    return Geometries::Ray(position, rayDirection);
}

Point2D fromImageToPlaneSpace(Point2D uv) {
    float width = projectionPlaneSize.getX();
    float height = projectionPlaneSize.getY();
    float x = ((2 * uv.getX() - width) / width) * tan(fovX);
    float y = ((2 * uv.getY() - height) / height) * tan(fovY);
    return Point2D(x, y);
}
The fovs:
double fovX = 3.14159265359 / 4.0;
double fovY = projectionPlaneSize.getY() / projectionPlaneSize.getX() * fovX;
I get a good result for a 1:1 width:height aspect ratio (e.g. 400x400):
But I get errors for e.g. 800x400:
Which is even slightly worse for bigger aspect ratios (like 1200x400):
What did I do wrong or which step did I omit?
Can it be a problem with precision or rather something with fromImageToPlaneSpace(...)?
Caveat: I spent 5 years at a video company, but I'm a little rusty.
Note: after writing this, I realized that pixel aspect ratio may not be your problem as the screen aspect ratio also appears to be wrong, so you can skip down a bit.
But, in video we were concerned with two different video sources: standard definition with a screen aspect ratio of 4:3 and high definition with a screen aspect ratio of 16:9.
But, there's also another variable/parameter: pixel aspect ratio. In standard definition, pixels are square and in hidef pixels are rectangular (or vice-versa--I can't remember).
Assuming your current calculations are correct for screen ratio, you may have to account for the pixel aspect ratio being different, either from camera source or the display you're using.
Both the screen aspect ratio and the pixel aspect ratio can be stored in a .mp4, .jpeg, etc.
I downloaded your 1200x400 jpeg. I used ImageMagick on it to change only the pixel aspect ratio:
convert orig.jpg -resize 125x100%\! new.jpg
This says to change the pixel aspect ratio (increase the width by 125% and leave the height the same). The \! means pixel vs screen ratio. The 125 is because I remember the rectangular pixel as 8x10. Anyway, you need to increase the horizontal width by 10/8, which is 1.25 or 125%.
Needless to say this gave me circles instead of ovals.
Actually, I was able to get the same effect by adjusting the screen aspect ratio.
So, somewhere in your calculations, you're introducing a distortion of that factor. Where are you applying the scaling? How are the function calls different?
Where do you set the screen size/ratio? I don't think that's shown (e.g. I don't see anything like 1200 or 400 anywhere).
If I had to hazard a guess, you must account for aspect ratio in fromImageToPlaneSpace. Either width/height needs to be prescaled or the x = and/or y = lines need scaling factors. AFAICT, what you've got will only work for square geometry at present. To test, using the 1200x400 case, multiply the x by 125% [a kludge] and I bet you get something.
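To make that concrete, here is a minimal sketch of an aspect-aware fromImageToPlaneSpace. It is not the original poster's code; it reuses the names projectionPlaneSize, fovX, and Point2D from the question and assumes fovX holds the horizontal half-angle:
Point2D fromImageToPlaneSpace(Point2D uv) {
    float width  = projectionPlaneSize.getX();
    float height = projectionPlaneSize.getY();
    float x = ((2 * uv.getX() - width)  / width)  * tan(fovX);
    // scale y by the same tan(fovX), then by height/width, so pixels stay square
    float y = ((2 * uv.getY() - height) / height) * tan(fovX) * (height / width);
    return Point2D(x, y);
}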
From the images, it looks like you have incorrectly defined the mapping from pixel coordinates to world coordinates and are introducing some stretch in the Y axis.
Skimming your code it looks like you are defining the camera's view frustum from the dimensions of the frame buffer. Therefore if you have a non-1:1 aspect ratio frame buffer, you have a camera whose view frustum is not 1:1. You will want to separate the model of the camera's view frustum from the image space dimension of the final frame buffer.
In other words, the frame buffer is the portion of the plane projected by the camera that we are viewing. The camera defines how the 3D space of the world is projected onto the camera plane.
Any basic book on 3D graphics will discuss viewing and projection.

opengl: avoid clipping in perspective mode?

It looks like the solution is to change the projection matrix on the fly? Let me do some research to see how to do it correctly.
My scenario is:===>
Say I have created a 3D box in a window under Windows 7 with perspective mode enabled. From the user's point of view, when the user moves (rotates/translates) this box and part of it leaves the window, it should be clipped (partly hidden); that's correct. But when the box is moved back inside the window, it should always be shown in full (not clipped), right? My problem is that sometimes, when the user moves the box inside the window, some parts of the box are still clipped (for example, one vertex of the box is clipped away). There is no limit on how much the user can move the box.
My understanding is:===>
When the user moves the box, it goes outside the frustum, and that's why it's clipped.
In this case, should my code adjust the frustum on the fly (which changes the projection matrix), adjust the camera on the fly (and maybe the near/far planes as well), or do something else?
My question is:===>
What is the popular technique to avoid this kind of clipping, while making sure the motion still feels smooth to the user, without any "jerk" (for example, the box suddenly jumping to another apparent location because the frustum changed abruptly while the user was moving it)?
I think this is a classic problem, so there should be a well-known solution. Any code/references are appreciated!
I attached a picture to show the problem:
This was happening to me, and adjusting the perspective matrix did not allow a near plane below 0.5 without all my objects disappearing.
Then I read this somewhere:
Depth clamping. The clipping behavior against the Z position of a vertex (i.e. -w_c <= z_c <= w_c) can be turned off by activating depth clamping:
glEnable( GL_DEPTH_CLAMP );
And I could get close to my objects without them being clipped away.
I do not know if doing this will cause other problems, but so far I have not encountered any.
I would suspect that your frustum is too narrow, so when you rotate your object, parts of it move outside of the viewable area. As an experiment, try increasing your frustum angle, increasing your Far value to something like 1000 or even 10000, and moving your camera further back from the centre (a higher negative value on the z-plane). This should generate a very large frustum that your object should fit within. Run your project and rotate; if the clipping effect is gone, you know your problem is either the frustum or the model scale (or both).
This code gets called before every redraw. I don't know how you're rotating/translating (timer or mouseDown), but in any case the methods described below can be done smoothly and appear natural to the user.
If your object is being clipped by the near plane, move the near cutoff plane back toward the camera (in this code, increase VIEWPLANEOFFSET). If the camera is too close to allow you to move the near plane far enough back, you may also need to move the camera back.
If your object is being clipped by the left, right, top or bottom clipping planes, adjust the camera aperture.
This is discussed in more detail below.
// ******************************* Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));
GLfloat phi = atanf(camera.viewPos.x/cameraRadius);
GLfloat theta = atanf(camera.viewPos.y/cameraRadius);
camera.viewUp.x = cosf(theta) * sinf(phi);
camera.viewUp.y = cosf(theta);
camera.viewUp.z = sinf(theta) * sinf(phi);
You'll see with the View matrix we're only defining the camera (eye) position and view direction. There's no clipping going on here yet, but the camera position will limit what we can see in that if it's too close to the object, we'll be limited in how we can set the near cutoff plane. I can't think of any reason not to set the camera back fairly far.
// ********************************************** Make the View Matrix
viewMatrix = GLKMatrix4MakeLookAt(camera.viewPos.x, camera.viewPos.y, camera.viewPos.z, camera.viewPos.x + camera.viewDir.x, camera.viewPos.y + camera.viewDir.y, camera.viewPos.z + camera.viewDir.z, camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);
The Projection matrix is where the clipping frustum is defined. Again, if the camera is too close, we won't be able to set the near cutoff plane to avoid clipping the object if it's bigger than our camera distance from the origin. While I can't see any reason not to set the camera back fairly far, there are reasons (accuracy of depth culling) not to set the near/far clipping planes any further apart than you need.
In this code the camera aperture is used directly, but if you're using something like glFrustum to create the Projection matrix, it's a good idea to calculate the left and right clipping planes from the camera aperture. This way you can create a zoom effect by varying the camera aperture (maybe in a mouseDown method) so the user can zoom in or out as he likes. Increasing the aperture effectively zooms out. Decreasing the aperture effectively zooms in.
// ********************************************** Make Projection Matrix
GLfloat aspectRatio;
GLfloat cameraNear, cameraFar;

// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - VIEWPLANEOFFSET;
if (cameraNear < 0.00001)
    cameraNear = 0.00001;
cameraFar = cameraRadius + VIEWPLANEOFFSET;
if (cameraFar < 1.0)
    cameraFar = 1.0;

// Get The Current Frame
NSRect viewRect = [self frame];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;

// Calculate the Ratio of The View Width / View Height
aspectRatio = viewRect.size.width / viewRect.size.height;
float fieldOfView = GLKMathDegreesToRadians(camera.aperture);
projectionMatrix = GLKMatrix4MakePerspective(fieldOfView, aspectRatio, cameraNear, cameraFar);
EDIT:
Here is some code illustrating how to calculate left and right clipping planes from the camera aperture:
GLfloat ratio, apertureHalfAngle, width;
GLfloat cameraLeft, cameraRight, cameraTop, cameraBottom, cameraNear, cameraFar;
GLfloat shapeSize = 3.0;
GLfloat cameraRadius;

// Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));

// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - (shapeSize * 0.5);
if (cameraNear < 0.00001)
    cameraNear = 0.00001;
cameraFar = cameraRadius + (shapeSize * 0.5);
if (cameraFar < 1.0)
    cameraFar = 1.0;

// Calculate the Camera Aperture Half Angle (radians) from the Camera Aperture (degrees)
apertureHalfAngle = (camera.aperture / 2) * PI / 180.0; // half aperture, degrees to radians

// Calculate the Width from 0 of the Left and Right Camera Cutoffs
// We Use Camera Radius Rather Than Camera Near For Our Own Reasons
width = cameraRadius * tanf(apertureHalfAngle);

NSRect viewRect = [self bounds];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;

// Calculate the Ratio of The View Width / View Height
ratio = camera.viewWidth / camera.viewHeight;

// Calculate the Camera Left, Right, Top and Bottom
if (ratio >= 1.0)
{
    cameraLeft = -ratio * width;
    cameraRight = ratio * width;
    cameraTop = width;
    cameraBottom = -width;
} else {
    cameraLeft = -width;
    cameraRight = width;
    cameraTop = width / ratio;
    cameraBottom = -width / ratio;
}
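As a follow-up to the glFrustum remark above, the values computed here could be fed to the fixed-function projection roughly like this (a sketch, not part of the original answer; with shaders you would build the equivalent matrix yourself and pass it as a uniform):
// build the projection from the clipping planes computed above
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(cameraLeft, cameraRight, cameraBottom, cameraTop, cameraNear, cameraFar);
glMatrixMode(GL_MODELVIEW);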