OpenGL: avoid clipping in perspective mode?

It looks like the solution is to change the projection matrix on the fly? Let me do some research to see how to do it correctly.
My scenario is:===>
Say I have created a 3D box in a window under Windows 7 with perspective mode enabled. From the user's point of view, when they move (rotate/translate) this box and it goes outside the window, it should be clipped (partly hidden), and that's correct. But when the box is moved back inside the window, it should always be shown completely (not clipped!), right? My problem is that sometimes, when the user moves the box inside the window, some parts of it are clipped away (for example, one vertex of the box disappears). There is no limit on how much the user can move the box.
My understanding is:===>
When the user moves the box, it goes outside the frustum, and that's why it gets clipped.
In this case, should my code adjust the frustum on the fly (which changes the projection matrix), adjust the camera on the fly (perhaps adjusting the near/far planes as well), or do something else?
My question is:===>
What is the popular technique to avoid this kind of clipping, while making sure the motion still feels smooth to the user, without any "jerk" (e.g. the box suddenly jumping to another apparent position because the frustum changed drastically while the user was moving it)?
I think this is a classic problem, so there should be a well-known solution. Any code/references are appreciated!
I attached a picture to show the problem:

This was happening to me, and adjusting the perspective matrix did not allow a near plane below 0.5 without all my objects disappearing.
Then I read this somewhere:
DEPTH CLAMPING - The clipping behavior against the Z position of a vertex (i.e. -w_c ≤ z_c ≤ w_c) can be turned off by activating depth clamping.
glEnable(GL_DEPTH_CLAMP);
And I could get close to my objects without them being clipped away.
I do not know if doing this will cause other problems, but so far I have not encountered any.
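For context, GL_DEPTH_CLAMP only relaxes clipping against the near and far planes (depth values outside the range get clamped instead); the left/right/top/bottom planes still clip, and the state is core since OpenGL 3.2 via ARB_depth_clamp. A minimal sketch of where the call might sit in a frame (drawScene() is a hypothetical draw call, not code from this thread):
glEnable(GL_DEPTH_CLAMP);   // primitives are no longer clipped against the near/far planes
drawScene();                // hypothetical: draw the objects that were getting clipped
glDisable(GL_DEPTH_CLAMP);  // restore default clipping for anything drawn afterwards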

I would suspect that your frustum is too narrow, so when you rotate your object, parts of it move outside the viewable area. As an experiment, try increasing your frustum angle, increasing your far value to something like 1000 or even 10000, and moving your camera further back from the centre (a larger negative value on the z-axis). This should generate a very large frustum that your object should fit within. Run your project and rotate: if the clipping effect is gone, you know your problem is either with the frustum or the model scale (or both).
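A minimal fixed-function sketch of that experiment (windowWidth/windowHeight and the specific numbers are only illustrative, not tuned values):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0,                                        // wide field-of-view angle, in degrees
               (double)windowWidth / (double)windowHeight,  // aspect ratio
               0.1,                                          // near plane
               10000.0);                                     // very large far plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -100.0f);                           // pull the camera well back from the centre
// ... rotate and draw the box as before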

This code gets called before every redraw. I don't know how you're rotating/translating (timer or mouseDown), but in any case the methods described below can be done smoothly and appear natural to the user.
If your object is being clipped by the near plane, move the near cutoff plane back toward the camera (in this code, increase VIEWPLANEOFFSET). If the camera is too close to allow you to move the near plane far enough back, you may also need to move the camera back.
If your object is being clipped by the left, right, top or bottom clipping planes, adjust the camera aperture.
This is discussed in more detail below.
// ******************************* Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));
GLfloat phi = atanf(camera.viewPos.x/cameraRadius);
GLfloat theta = atanf(camera.viewPos.y/cameraRadius);
camera.viewUp.x = cosf(theta) * sinf(phi);
camera.viewUp.y = cosf(theta);
camera.viewUp.z = sinf(theta) * sinf(phi);
You'll see with the View matrix we're only defining the camera (eye) position and view direction. There's no clipping going on here yet, but the camera position will limit what we can see in that if it's too close to the object, we'll be limited in how we can set the near cutoff plane. I can't think of any reason not to set the camera back fairly far.
// ********************************************** Make the View Matrix
viewMatrix = GLKMatrix4MakeLookAt(camera.viewPos.x, camera.viewPos.y, camera.viewPos.z, camera.viewPos.x + camera.viewDir.x, camera.viewPos.y + camera.viewDir.y, camera.viewPos.z + camera.viewDir.z, camera.viewUp.x, camera.viewUp.y, camera.viewUp.z);
The Projection matrix is where the clipping frustum is defined. Again, if the camera is too close, we won't be able to set the near cutoff plane to avoid clipping the object if it's bigger than our camera distance from the origin. While I can't see any reason not to set the camera back fairly far, there are reasons (accuracy of depth culling) not to set the near/far clipping planes any further apart than you need.
In this code the camera aperture is used directly, but if you're using something like glFrustum to create the Projection matrix, it's a good idea to calculate the left and right clipping planes from the camera aperture. This way you can create a zoom effect by varying the camera aperture (maybe in a mouseDown method) so the user can zoom in or out as he likes. Increasing the aperture effectively zooms out. Decreasing the aperture effectively zooms in.
// ********************************************** Make Projection Matrix
GLfloat aspectRatio;
GLfloat cameraNear, cameraFar;
// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - VIEWPLANEOFFSET;
if (cameraNear < 0.00001)
cameraNear = 0.00001;
cameraFar = cameraRadius + VIEWPLANEOFFSET;
if (cameraFar < 1.0)
cameraFar = 1.0;
// Get The Current Frame
NSRect viewRect = [self frame];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;
// Calculate the Ratio of The View Width / View Height
aspectRatio = viewRect.size.width / viewRect.size.height;
float fieldOfView = GLKMathDegreesToRadians(camera.aperture);
projectionMatrix = GLKMatrix4MakePerspective(fieldOfView, aspectRatio, cameraNear, cameraFar);
EDIT:
Here is some code illustrating how to calculate left and right clipping planes from the camera aperture:
GLfloat ratio, apertureHalfAngle, width;
GLfloat cameraLeft, cameraRight, cameraTop, cameraBottom, cameraNear, cameraFar;
GLfloat shapeSize = 3.0;
GLfloat cameraRadius;
// Distance of The Camera from the Origin
cameraRadius = sqrtf((camera.viewPos.x * camera.viewPos.x) + (camera.viewPos.y * camera.viewPos.y) + (camera.viewPos.z * camera.viewPos.z));
// The Camera Near and Far Cutoff Planes
cameraNear = cameraRadius - (shapeSize * 0.5);
if (cameraNear < 0.00001)
cameraNear = 0.00001;
cameraFar = cameraRadius + (shapeSize * 0.5);
if (cameraFar < 1.0)
cameraFar = 1.0;
// Calculate the camera Aperture Half Angle (radians) from the Camera Aperture (degrees)
apertureHalfAngle = (camera.aperture / 2) * PI / 180.0; // half aperture degrees to radians
// Calculate the Width from 0 of the Left and Right Camera Cutoffs
// We Use Camera Radius Rather Than Camera Near For Our Own Reasons
width = cameraRadius * tanf(apertureHalfAngle);
NSRect viewRect = [self bounds];
camera.viewWidth = viewRect.size.width;
camera.viewHeight = viewRect.size.height;
// Calculate the Ratio of The View Width / View Height
ratio = camera.viewWidth / camera.viewHeight;
// Calculate the Camera Left, Right, Top and Bottom
if (ratio >= 1.0)
{
cameraLeft = -ratio * width;
cameraRight = ratio * width;
cameraTop = width;
cameraBottom = -width;
} else {
cameraLeft = -width;
cameraRight = width;
cameraTop = width / ratio;
cameraBottom = -width / ratio;
}
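If you feed these values to a fixed-function glFrustum call, keep in mind that glFrustum measures the left/right/bottom/top extents at the near plane, while the widths above were computed at cameraRadius. A minimal sketch that preserves the aperture angle by rescaling (the cameraNear/cameraRadius scaling is my own assumption about how these values would be hooked up, not part of the answer above):
GLfloat nearScale = cameraNear / cameraRadius;   // glFrustum takes extents measured at the near plane
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(cameraLeft * nearScale, cameraRight * nearScale,
          cameraBottom * nearScale, cameraTop * nearScale,
          cameraNear, cameraFar);
glMatrixMode(GL_MODELVIEW);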

Related

Why is my Ray Tracer getting so much edge distortion?

I am writing a ray tracer from scratch. The example renders two spheres using ray-sphere intersection detection. When the spheres are close to the center of the screen, they look fine. However, when I move the camera, or if I adjust the spheres' positions so they are closer to the edge, they become distorted.
This is the ray casting code:
void Renderer::RenderThread(int start, int span)
{
// pCamera holds the position, rotation, and fov of the camera
// pRenderTarget is the screen to render to
// calculate the camera space to world space matrix
Mat4 camSpaceMatrix = Mat4::Get3DTranslation(pCamera->position.x, pCamera->position.y, pCamera->position.z) *
Mat4::GetRotation(pCamera->rotation.x, pCamera->rotation.y, pCamera->rotation.z);
// use the cameras origin as the rays origin
Vec3 origin(0, 0, 0);
origin = (camSpaceMatrix * origin.Vec4()).Vec3();
// this for loop loops over all the pixels on the screen
for ( int p = start; p < start + span; ++p ) {
// get the pixel coordinates on the screen
int px = p % pRenderTarget->GetWidth();
int py = p / pRenderTarget->GetWidth();
// in ray tracing, ndc space is from [0, 1]
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
// in ray tracing, screen space is [-1, 1]
Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);
// scale x by aspect ratio
screen.x *= (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();
// scale screen by the field of view
// fov is currently set to 90
screen *= tan((pCamera->fov / 2) * (PI / 180));
// screen point is the pixels point in camera space,
// give a z value of -1
Vec3 camSpace(screen.x, screen.y, -1);
camSpace = (camSpaceMatrix * camSpace.Vec4()).Vec3();
// the rays direction is its point on the cameras viewing plane
// minus the cameras origin
Vec3 dir = (camSpace - origin).Normalized();
Ray ray = { origin, dir };
// find where the ray intersects with the spheres
// using ray-sphere intersection algorithm
Vec4 color = TraceRay(ray);
pRenderTarget->PutPixel(px, py, color);
}
}
The FOV is set to 90. I have seen other people have this problem, but it was because they were using a very high FOV value. I don't think there should be issues with 90. This issue persists even if the camera is not moved at all. Any object close to the edge of the screen appears distorted.
When in doubt, you can always check out what other renderers are doing. I always compare my results and settings to Blender. Blender 2.82, for example, has a default field of view of 39.6 degrees.
I also feel inclined to point out that this is wrong:
Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());
If you want to get the center of the pixel, then it should be 0.5f:
Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());
Also, and this is really a nit-picky kind of thing, your intervals are open intervals and not closed ones (as you mentioned in the source comments). The image plane coordinates never reach 0 or 1, and your camera space coordinates are never fully -1 or 1. Eventually, when the image plane coordinates are converted to pixel coordinates, they fall in the half-open intervals [0, width) and [0, height).
Good luck on your ray tracer!

Arcball Camera: how to obtain right, direction and up

I'm trying to implement a camera that follows a moving object. I've implemented these functions:
void Camera::espheric_yaw(float degrees, glm::vec3 center_point)
{
float lim_yaw = glm::radians(89.0f);
float radians = glm::radians(degrees);
absoluteYaw += radians;
... clamp absoluteYaw
float radius = 10.0f;
float camX = cos(absoluteYaw) * cos(absoluteRoll) * radius;
float camY = sin(absoluteRoll)* radius;
float camZ = sin(absoluteYaw) * cos(absoluteRoll) * radius;
eyes.x = camX;
eyes.y = camY;
eyes.z = camZ;
lookAt = center_point;
view = glm::normalize(lookAt - eyes);
up = glm::vec3(0, 1, 0);
right = glm::normalize(glm::cross(view, up));
}
I want to use this function (and the pitch version) for a camera that follows a moving 3D model. Right now, it works when center_point is (0, 1, 0). I think I'm getting the position right, but the up vector is clearly not always (0, 1, 0).
How can I get my up, view and right vector for the camera? And then, if I update the eyes position of the camera this way, how will my camera move when the other object (centered at center_position parameter) moves?
The idea is to update this each time I have mouse input with centered_value = center of the moving object. Then use gluLookAt with view, eyes and up values of my camera (and lookAt which will be eyes+view).
Following a moving object is a matter of pointing the camera at that object. This is what a typical lookAt function does. See the maths here and then use glm::lookAt().
The 'arcball' technique is for rotating with the mouse. See some maths here.
The idea is to get two vectors (first, second) from positions on the screen. For each vector, X and Y are taken from the pixels "travelled" by the mouse relative to the size of the window, and Z is calculated by the 'trackball' maths. After normalizing these two vectors, their cross product gives the axis of rotation in camera coordinates, and their dot product gives (the cosine of) the angle. Now you can rotate the camera with glm::rotate(), as in the sketch below.
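A minimal GLM sketch of that mapping (the helper names trackballPoint/arcballRotate, the window size winW/winH, and the mouse positions are illustrative assumptions, not code from the question):
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Map a window position (in pixels) onto a virtual unit sphere ("trackball" maths).
glm::vec3 trackballPoint(float px, float py, float winW, float winH)
{
    float x = (2.0f * px - winW) / winW;                // [-1, 1] across the window
    float y = (winH - 2.0f * py) / winH;                // [-1, 1], flipped so +y points up
    float d2 = x * x + y * y;
    float z = d2 < 1.0f ? std::sqrt(1.0f - d2) : 0.0f;  // lift the point onto the sphere
    return glm::normalize(glm::vec3(x, y, z));
}
// Build the rotation from the previous and current mouse positions and apply it to the view.
glm::mat4 arcballRotate(const glm::mat4& view,
                        float prevX, float prevY, float currX, float currY,
                        float winW, float winH)
{
    glm::vec3 first  = trackballPoint(prevX, prevY, winW, winH);
    glm::vec3 second = trackballPoint(currX, currY, winW, winH);
    glm::vec3 axis   = glm::cross(first, second);       // rotation axis in camera coordinates
    float angle = std::acos(glm::clamp(glm::dot(first, second), -1.0f, 1.0f));
    if (glm::length(axis) < 1e-6f)                       // no movement: nothing to rotate
        return view;
    return glm::rotate(view, angle, glm::normalize(axis));
}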
If you go another route (e.g. calculating the camera matrix on your own), then the "up" direction of the camera must be updated by yourself. Remember it is perpendicular to the other two axes of the camera.

Switching from perspective to orthogonal keeping the same view size of model and zooming

I have fov angle = 60, window width = 640 and height = 480, near = 0.01 and far = 100 planes, and I get the projection matrix using glm::perspective():
glm::perspective(glm::radians(fov),
width / height,
zNear,
zFar);
It works well.
Then I want to change the projection type to orthographic, but I don't know how to compute the input parameters of glm::ortho() properly.
I've tried many ways, but the problem is that after switching to the orthographic projection, the apparent size of the model changes.
Say I have a cube with center at (0.5, 0.5, 0.5) and side length 1, and a camera with mEye at (0.5, 0.5, 3), mTarget at (0.5, 0.5, 0.5) and mUp (0, 1, 0). The view matrix is glm::lookAt(mEye, mTarget, mUp).
With the perspective projection it works well. With glm::ortho(-width, width, -height, height, zNear, zFar) my cube becomes a small pixel in the center of the window.
I've also tried to implement this variant: How to switch between Perspective and Orthographic cameras keeping size of desired object
but the result is (almost) the same as before.
So, the first question is: how do I compute the ortho parameters so that the object keeps its original on-screen size for the current camera position?
Also, zooming with
auto distance = glm::length(mTarget - mEye);
mEye = mTarget - glm::normalize(mTarget - mEye) * distance;
has no effect with ortho. Thus the second question is: how do I implement zooming in the case of an ortho projection?
P.s.
I assume I understand ortho correctly: the projected size of the model doesn't depend on depth, but nevertheless I can still decide where the camera is, set the apparent size of the model properly, and use zoom. I also assume this is a simple, common task, for example when developing a 3D viewer/editor/etc. Correct me if it is not.
How do I compute the ortho parameters so that the object keeps its original view size for the current camera position?
With an orthographic projection, the 3-dimensional scene is projected in parallel onto the 2-dimensional viewport.
This means that objects projected onto the viewport always have the same size, independent of their depth (distance to the camera).
The perspective projection describes the mapping of 3D points in the world, as they are seen from a pinhole camera, to 2D points on the viewport.
This means that an object projected onto the viewport becomes smaller with increasing depth.
If you switch from perspective to orthographic projection, only the objects lying in a single plane parallel to the viewport keep their size. Note that a plane is 2-dimensional and has no "depth", so a 3-dimensional object can never "look" the same when the projection is switched, but a 2-dimensional billboard can keep its size.
The ratio of size to depth under perspective projection is linear and can be calculated. It depends only on the field-of-view angle:
float ratio_size_per_depth = tan(glm::radians(fov / 2.0f));
If you want to set up an orthographic projection, which keeps the size for a certain distance (depth) then you have to define the depth first:
e.g. Distance to the target point:
auto distance = glm::length(mTarget - mEye);
the projection can be set up like this:
float aspect = width / height;
float size_y = ratio_size_per_depth * distance;
float size_x = ratio_size_per_depth * distance * aspect;
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
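As a quick sanity check with the question's numbers (fov = 60°, camera at (0.5, 0.5, 3) looking at (0.5, 0.5, 0.5), so distance = 2.5): the perspective view spans a half-height of 2.5 * tan(30°) ≈ 1.44 at the target plane, so an orthographic projection built with size_y ≈ 1.44 shows the cube at the same apparent size at that depth.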
How do I implement zooming in the case of an ortho projection?
Scale the XY components of the orthographic projection:
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
float orthScale = 2.0f;
orthProject = glm::scale(orthProject, glm::vec3(orthScale, orthScale, 1.0f));
Set a value for orthScale which is > 1.0 for zoom in and a value which is < 1.0 for zoom out.

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
int width = this.OpenGlControl.Width;
int height = this.OpenGlControl.Height;
Gl.glViewport(0, 0, width, height);
Gl.glMatrixMode(Gl.GL_PROJECTION);
Gl.glLoadIdentity();
float ratio = (float)width / (float)height;
Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
extents *= viewZoom;
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0, 0, 0);
Vec2 lower = this.viewCenter - extents;
Vec2 upper = this.viewCenter + extents;
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, the 2nd to 4th arguments are the rotation axis, so please state them correctly, as noted by @pingul.
More importantly, the rotation is being applied to the projection matrix.
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
In this line your orthographic 2D projection matrix is being multiplied with the previous rotation and applied to the projection matrix stack, which I believe is not what you want.
The solution would be to move your rotation call to after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
And now your rotations will be applied to the model-view matrix stack (I believe this is the effect you want). Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.

Wrong aspect ratio calculations for camera (simple ray-caster)

I am working on a really simple ray tracer.
For now I am trying to make the perspective camera work properly.
I use the following loop to render the scene (with just two hard-coded spheres; I cast a ray for each pixel from its center, no AA applied):
Camera * camera = new PerspectiveCamera({ 0.0f, 0.0f, 0.0f }/*pos*/,
{ 0.0f, 0.0f, 1.0f }/*direction*/, { 0.0f, 1.0f, 0.0f }/*up*/,
buffer->getSize() /*projectionPlaneSize*/);
Sphere * sphere1 = new Sphere({ 300.0f, 50.0f, 1000.0f }, 100.0f); //center, radius
Sphere * sphere2 = new Sphere({ 100.0f, 50.0f, 1000.0f }, 50.0f);
for(int i = 0; i < buffer->getSize().getX(); i++) {
for(int j = 0; j < buffer->getSize().getY(); j++) {
//for each pixel of buffer (image)
double centerX = i + 0.5;
double centerY = j + 0.5;
Geometries::Ray ray = camera->generateRay(centerX, centerY);
Collision * collision = ray.testCollision(sphere1, sphere2);
if(collision){
//output red
}else{
//output blue
}
}
}
The Camera::generateRay(float x, float y) is:
Geometries::Ray Camera::generateRay(float x, float y) {
//position = camera position, direction = camera direction etc.
Point2D xy = fromImageToPlaneSpace({ x, y });
Vector3D imagePoint = right * xy.getX() + up * xy.getY() + position + direction;
Vector3D rayDirection = imagePoint - position;
rayDirection.normalizeIt();
return Geometries::Ray(position, rayDirection);
}
Point2D fromImageToPlaneSpace(Point2D uv) {
float width = projectionPlaneSize.getX();
float height = projectionPlaneSize.getY();
float x = ((2 * uv.getX() - width) / width) * tan(fovX);
float y = ((2 * uv.getY() - height) / height) * tan(fovY);
return Point2D(x, y);
}
The fovs:
double fovX = 3.14159265359 / 4.0;
double fovY = projectionPlaneSize.getY() / projectionPlaneSize.getX() * fovX;
I get a good result for a 1:1 width:height aspect ratio (e.g. 400x400):
But I get errors for e.g. 800x400:
Which is even slightly worse for bigger aspect ratios (like 1200x400):
What did I do wrong or which step did I omit?
Can it be a problem with precision or rather something with fromImageToPlaneSpace(...)?
Caveat: I spent 5 years at a video company, but I'm a little rusty.
Note: after writing this, I realized that pixel aspect ratio may not be your problem as the screen aspect ratio also appears to be wrong, so you can skip down a bit.
But, in video we were concerned with two different video sources: standard definition with a screen aspect ratio of 4:3 and high definition with a screen aspect ratio of 16:9.
But, there's also another variable/parameter: pixel aspect ratio. In standard definition, pixels are square and in hidef pixels are rectangular (or vice-versa--I can't remember).
Assuming your current calculations are correct for screen ratio, you may have to account for the pixel aspect ratio being different, either from camera source or the display you're using.
Both the screen aspect ratio and the pixel aspect ratio can be stored in a .mp4, .jpeg, etc.
I downloaded your 1200x400 jpeg. I used ImageMagick on it to change only the pixel aspect ratio:
convert orig.jpg -resize 125x100%\! new.jpg
This says change the pixel aspect ratio (scale the width to 125% and leave the height the same). The \! means pixel vs screen ratio. The 125 is because I remember the rectangular pixel as 8x10. Anyway, you need to increase the horizontal width by 10/8, which is 1.25 or 125%.
Needless to say this gave me circles instead of ovals.
Actually, I was able to get the same effect by adjusting the screen aspect ratio.
So, somewhere in your calculations, you're introducing a distortion of that factor. Where are you applying the scaling? How are the function calls different?
Where do you set the screen size/ratio? I don't think that's shown (e.g. I don't see anything like 1200 or 400 anywhere).
If I had to hazard a guess, you must account for aspect ratio in fromImageToPlaneSpace. Either width/height needs to be prescaled or the x = and/or y = lines need scaling factors. AFAICT, what you've got will only work for square geometry at present. To test, using the 1200x400 case, multiply the x by 125% [a kludge] and I bet you get something.
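For illustration, here is a sketch of what such a scaling factor might look like inside fromImageToPlaneSpace; it assumes a single full vertical field of view fovY (in radians) and scales x by the aspect ratio, reusing the question's names rather than being the original code:
// Sketch: derive both extents from one field of view and scale x by the aspect
// ratio so that non-square buffers are not stretched.
Point2D fromImageToPlaneSpace(Point2D uv) {
    float width   = projectionPlaneSize.getX();
    float height  = projectionPlaneSize.getY();
    float aspect  = width / height;                    // e.g. 3.0 for a 1200x400 buffer
    float halfTan = tan(fovY / 2.0f);                  // half-extent of the view at unit depth
    float x = ((2 * uv.getX() - width)  / width)  * halfTan * aspect;
    float y = ((2 * uv.getY() - height) / height) * halfTan;
    return Point2D(x, y);
}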
From the images, it looks like you have incorrectly defined the mapping from pixel coordinates to world coordinates and are introducing some stretch in the Y axis.
Skimming your code it looks like you are defining the camera's view frustum from the dimensions of the frame buffer. Therefore if you have a non-1:1 aspect ratio frame buffer, you have a camera whose view frustum is not 1:1. You will want to separate the model of the camera's view frustum from the image space dimension of the final frame buffer.
In other words, the frame buffer is the portion of the plane projected by the camera that we are viewing. The camera defines how the 3D space of the world is projected onto the camera plane.
Any basic book on 3D graphics will discuss viewing and projection.