I have a 3D scene with a perspective projection.
I want to fit the scene to the screen based on a bounding box (min and max).
I have centered my scene like this:
glm::vec3 center = (min + max) / 2.0f;
rootNode->translate(-center.x, -center.y, -center.z);
Now I need a scale factor to scale my rootNode to fit the screen.
How do I do this?
(this: 8.070 How can I automatically calculate a view that displays my entire model? (I know the bounding sphere and up vector.) does not help because it's based on an orthogonal projection)
The reason this question is harder with a perspective projection than it is with an orthogonal projection is that the min and max you need are not constant with a perspective projection.
With a perspective projection, the distance between the edges of the visible region increases as you move away from the camera.
With a perspective projection you typically have a field of view angle, theta, a camera position, and a "looking at" vector, v. At any distance d from the camera's position (in the direction of v), you can imagine a plane whose normal is v. The region of this plane that your camera can "see" has width:
2 * d * tan(theta / 2).
In a simple fixed camera setup you might have your camera at the origin and looking down the z-axis, and then the distance d for any point will just be the point's z coordinate.
Note also that you may have different horizontal and vertical field of view angles. If you have set a vertical field of view angle "fovy" and an aspect ratio (viewport width / viewport height), the two angles are related by tan(fovx / 2) = aspect * tan(fovy / 2); the horizontal angle is not simply fovy times the aspect ratio (that only holds approximately for small angles).
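To fit a scene whose bounding sphere has radius r (for example, half the diagonal of your min/max box), you can solve that width equation for the distance d. A minimal sketch, assuming glm as in the question and a symmetric frustum (fitDistance is a hypothetical helper):

#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

// Distance at which a sphere of radius r spans the narrower field of view.
float fitDistance(float fovyRadians, float aspect, float radius)
{
    float tanHalfY = std::tan(fovyRadians / 2.0f);
    float tanHalfX = tanHalfY * aspect;            // tan(fovx/2) = aspect * tan(fovy/2)
    float tanHalf  = std::min(tanHalfX, tanHalfY); // the limiting direction
    // From width = 2 * d * tan(theta/2): the slice through the sphere's
    // center just fits when d * tan(theta/2) = radius.
    return radius / tanHalf;
}

// Radius from the bounding box of the question:
// float radius = glm::length(max - min) / 2.0f;

If you would rather scale than move the camera, note that at your current camera distance D the scene fits when scaled by D / fitDistance(...). Using radius / sin(theta/2) instead of radius / tan(theta/2) is the conservative choice that keeps the entire sphere (not just the slice through its center) inside the frustum; see the bounding-sphere answer further down.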
I have a simple OpenGL space with a 60-degree (horizontal) frustum angle, Z_near = 0.1 and Z_far = 100.
I also have a square (just 2 triangles) that is always oriented perpendicular to the camera. In fact, I just don't apply the view or projection transforms to it, only the model matrix transform, which ensures that it's always in front of the camera (I only want to change the Z coordinate to move it away from the camera).
The square dimensions are 1.0 x 1.0
The viewport is 1024x768 pixels.
How do I calculate the distance it needs to be from the camera so that the square takes up exactly 0.25 of the width of the screen?
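For what it's worth, the width formula above applies directly here (assuming the 60 degrees really is the horizontal field of view and the square does go through the perspective transform):

visibleWidth(d) = 2 * d * tan(60°/2)
want: 1.0 = 0.25 * visibleWidth, so visibleWidth = 4.0
d = 4.0 / (2 * tan(30°)) ≈ 3.46

The 1024x768 viewport size cancels out, since 0.25 of the width is a pure ratio.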
I am using an orthographic projection (glOrtho) for my scene. I implemented a virtual trackball to rotate an object, and besides that I also implemented zoom in/out on the view matrix. Say I have a cube of size 100 units located at (0, -40000, 0), far from the origin. If the center of rotation is at the origin, then after the user rotates the cube and zooms in or out, it could end up positioned somewhere like (0, 0, 2500000) (this position is just an assumption; it is calculated after multiplying by the view matrix).

Currently I define a very big range for the near (-150000) and far (150000) planes, but sometimes the object still lies outside either the near or the far plane and simply turns invisible. If I define even larger near and far clipping planes, say -1000000 and 1000000, I get ugly z artifacts. So my question is: how do I correctly calculate the near and far planes while the user rotates the object in real time? Thanks in advance!
Update:
I have implemented a bounding sphere for the cube. I use the inverse of the view matrix to calculate the camera position, and I calculate the distance from the camera position to the center of the bounding sphere (the center of the bounding sphere is transformed by the view matrix). But I couldn't get it to work. Can you further explain the relationship between the camera position and the near plane?
A simple way is to use a "bounding sphere". If you know the data's bounding box, the maximum diagonal length is the diameter of the bounding sphere.
Let's say you calculate the distance 'dCC' from the camera position to the center of the sphere. Let 'r' be the radius of that sphere. Then:
Near = dCC - r - smallMargin
Far = dCC + r + smallMargin
'smallMargin' is a value used just to avoid clipping points on the surface of the sphere due to numerical precision issues.
The center of the sphere should be the center of rotation. If not, the diameter should grow so as to cover all data.
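As a minimal sketch (assuming a glm-style view matrix; computeNearFar is a hypothetical helper), you could recompute the planes every frame like this:

#include <glm/glm.hpp>

void computeNearFar(const glm::mat4& view, const glm::vec3& centerWorld,
                    float r, float& nearPlane, float& farPlane)
{
    float smallMargin = 0.01f * r;
    // Transform the sphere center into eye space; the camera there looks
    // down -Z, so -z is the distance from the camera along the view axis.
    glm::vec3 centerEye = glm::vec3(view * glm::vec4(centerWorld, 1.0f));
    float dCC = -centerEye.z;
    nearPlane = dCC - r - smallMargin;
    farPlane  = dCC + r + smallMargin;
}

Regarding the update: the near plane sits nearPlane units in front of the camera along the view direction, so the eye-space z of the sphere center (rather than the full Euclidean distance, if the center is off-axis) is the value you want. For a perspective projection you would also clamp nearPlane to a small positive value; with glOrtho a negative near is fine.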
Well, the thing is that I want to draw mazes with different widths and heights. I'm drawing them in units, and my question is: how can I get the viewable plane dimensions in units, so that I know how deep into the screen I have to draw my maze for it to be fully visible? For the perspective view I use "::gluPerspective(45.0f, (GLfloat)width / (GLfloat)height, 1.0f, 100.0f);"
For example, how do I get the near plane dimensions (width and height) in OpenGL units, or the far plane, or any plane between them? If I want to draw something entirely visible, I need to know the plane dimensions in OpenGL units, or is there another way?
A bit of trigonometry will tell you that h_near = 2 * near * tan(fovy/2), and the same for far: h_far = 2 * far * tan(fovy/2).
Then, the aspect ratio will give you the width.
For the "proof", consider the right-angled triangle formed by the line of sight, the vertical half of the rendering plane, and the line back to the eye. The length of the line of sight is near or far (depending on the plane), the angle at the eye position is fovy/2 (i.e. half the view angle), and the vertical side on the plane is h_near/2 or h_far/2, since we only go halfway up the plane. The tangent of an angle in a right-angled triangle is equal to the opposite side divided by the adjacent side...
I am trying to implement a raytracer that uses an arbitrary camera position and perspective projection. I have the camera position, the look at position, the angle of field of view, but I cannot figure out the direction I have to shoot the rays so that each ray corresponds to a pixel. If I could find a way to find the coordinates of the image plane, or the direction vectors the rays should have, it would be downhill from there. Any help is appreciated.
I would do the following: imagine that there is a rectangular grid just in front of your eye. The grid is defined by one point (the (0;0) point of the grid) and two (three-dimensional) basis vectors (x, y); with this you can calculate a ray as (origin + Xcoordinate * x + Ycoordinate * y) - eye. By adjusting the distance between your eye point and the grid origin, or by adjusting the length of the basis vectors, you can get the desired angle of view.
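A minimal sketch of that grid idea, assuming glm, a vertical field-of-view angle, and a hypothetical rayDirection helper:

#include <glm/glm.hpp>
#include <cmath>

glm::vec3 rayDirection(const glm::vec3& eye, const glm::vec3& lookAt,
                       const glm::vec3& up, float fovyRadians,
                       int px, int py, int width, int height)
{
    // Orthonormal camera basis.
    glm::vec3 forward = glm::normalize(lookAt - eye);
    glm::vec3 right   = glm::normalize(glm::cross(forward, up));
    glm::vec3 trueUp  = glm::cross(right, forward);
    // Half-extents of the image plane placed at distance 1 from the eye.
    float halfH = std::tan(fovyRadians / 2.0f);
    float halfW = halfH * (float)width / (float)height;
    // Map the pixel center to [-1, 1] in both directions.
    float x = ((px + 0.5f) / width * 2.0f - 1.0f) * halfW;
    float y = (1.0f - (py + 0.5f) / height * 2.0f) * halfH;
    // (grid point - eye) = forward + x * right + y * trueUp
    return glm::normalize(forward + x * right + y * trueUp);
}

Placing the plane at distance 1 and scaling its half-extents by tan(fov/2) is the same trick as adjusting the eye-to-grid distance: only the ratio matters.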
Heyo,
I'm currently working on a project where I need to place the camera such that the full motion of a character would be viewable without moving the camera. I have the position where the character starts, as well as the maximum distance that the character will travel in all three directions (X,Y, & Z). I also have the field of view (which is 90 degrees).
Is there an equation that'll figure out where I need to place the camera so it won't have to move to see the full motion?
Note: this is using OpenGL.
Clarification: The camera should be "in front" of the character that's in the motion, not above.
It'll also be moving along a ground plane.
If you make a bounding sphere of the points, all you need to do is keep the camera at a distance greater than or equal to (radius of the bounding sphere) / sin(FOV/2) from its center.
For example, if you have a bounding sphere with radius Radius, and a specified Field of View FOV, your camera just needs to be at a point "Dist" away, pointing towards the center of the bounding sphere.
The equation for calculating the distance is:
Dist = Radius / sin( FOV/2 );
This will work in 3D, for a camera at any orientation.
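A minimal sketch (assuming glm; fov should be the smaller of the horizontal and vertical angles, in radians; cameraPosition is a hypothetical helper):

#include <glm/glm.hpp>
#include <cmath>

glm::vec3 cameraPosition(const glm::vec3& sphereCenter, float radius,
                         float fov, const glm::vec3& viewDirection)
{
    float dist = radius / std::sin(fov / 2.0f);  // Dist = Radius / sin(FOV/2)
    // Step back from the center along the (normalized) viewing direction.
    return sphereCenter - glm::normalize(viewDirection) * dist;
}

Using sin rather than tan is what keeps the whole sphere inside the frustum, rather than just the disc through its center.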
Simply having the maximum range of (X, Y, Z) is not on its own sufficient, because the viewing port is essentially pyramid shaped, with the apex of the pyramid being at the eye position.
For the sake of argument, let's assume that all movement is in the (X, Z) plane (i.e. the ground), and the eye is directly above the origin 10m along the Y axis.
Assuming a square viewport, with your 90° field of view you'd be able to see from ±10m along both the X and Z axes, but only for objects that are on the ground (Y = 0). As soon as they come off the ground, your view is reduced. If an object is 1m off the ground, then your (X, Z) extent is only ±9m.
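In general, for this straight-down setup, the visible half-extent at an object height y is (10 - y) * tan(90°/2) = 10 - y, which is where the ±10m and ±9m figures come from.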
Clearly a real camera could be placed anywhere in the scene, facing any direction. Even the "roll" angle of the camera changes how much is visible. There are actually infinitely many such camera positions, so you will need to constrain your criteria somewhat.
Take the line segment from the start point to the end point. Construct a plane orthogonal to this line segment through its midpoint. Then position the camera somewhere in this plane, at a distance of more than the following from the intersection point of the plane and the line, looking at the intersection point. The up vector of the camera must lie in the plane, and the horizontal field of view must be 90 degrees.
distance = sqrt(dx^2 + dy^2 + dz^2) / 2
These camera positions will all have the start point and the end point on the left and right borders of the viewport, vertically centered.
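A minimal sketch of this construction (assuming glm, a Y-up ground plane, and a hypothetical placeCamera helper):

#include <glm/glm.hpp>

void placeCamera(const glm::vec3& start, const glm::vec3& end,
                 glm::vec3& camPos, glm::vec3& camTarget)
{
    glm::vec3 mid  = (start + end) / 2.0f;             // midpoint of the segment
    float distance = glm::length(end - start) / 2.0f;  // sqrt(dx^2+dy^2+dz^2) / 2
    // Pick a direction orthogonal to the segment that lies in the ground plane
    // (degenerate if the segment is vertical; pick any perpendicular then).
    glm::vec3 side = glm::normalize(
        glm::cross(glm::normalize(end - start), glm::vec3(0.0f, 1.0f, 0.0f)));
    camPos    = mid + side * distance;  // at exactly this distance the endpoints
                                        // land on the viewport borders (90° hFOV)
    camTarget = mid;                    // look at the intersection point
}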
Another solution might be to write a function that takes the startpoint, the endpoint, and the desired position of both points on the screen. Then just solve the projection equation for the camera transformation.
It depends. For example, if the object is going to move in a plane, you can just place the camera outside a ball circumscribing its movement area (this depends on the fact that the FOV is 90, which is a fortunate angle).
If the object is going to move in 3D, it's much more difficult. It would help if you specified the region in which the object moves (cube vs. ball...) and the direction you want to see it from.