3D camera position given some points - OpenGL

Heyo,
I'm currently working on a project where I need to place the camera such that the full motion of a character would be viewable without moving the camera. I have the position where the character starts, as well as the maximum distance that the character will travel in all three directions (X,Y, & Z). I also have the field of view (which is 90 degrees).
Is there an equation that'll figure out where I need to place the camera so it won't have to move to see the full motion?
Note: this is using OpenGL.
Clarification: The camera should be "in front" of the character that's in the motion, not above.
It'll also be moving along a ground plane.

If you build a bounding sphere around the points, all you need to do is keep the camera at a distance greater than or equal to (radius of the bounding sphere) / sin(FOV/2) from the sphere's center.
For example, if you have a bounding sphere with radius Radius and a specified field of view FOV, your camera just needs to be at a point Dist away from the sphere's center, pointing towards that center.
The equation for calculating the distance is:
Dist = Radius / sin( FOV/2 );
This will work in 3D, for a camera at any orientation.
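As a minimal sketch of that calculation (the function name, and the assumption that FOV is the full angle in radians, are mine rather than from the answer):
#include <cmath>

// Minimum distance from the camera to the sphere's center so that the
// whole sphere fits in view. 'fov' is the full field of view in radians.
float cameraDistance(float radius, float fov) {
    return radius / std::sin(fov * 0.5f);
}

// Example: with a 90-degree FOV, cameraDistance(r, pi/2) == r / sin(45 deg) ~= 1.414 * r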

Simply having the maximum range of (X, Y, Z) is not sufficient on its own, because the viewport is essentially pyramid-shaped, with the apex of the pyramid at the eye position.
For the sake of argument, let's assume that all movement is in the (X, Z) plane (i.e. the ground), and the eye is directly above the origin 10m along the Y axis.
Assuming a square viewport, with your 90˚ field of view you'd be able to see from ±10m along both the X and Z axes, but only for objects that are on the ground (Y = 0). As soon as they come off the ground, your view is reduced. If an object is 1m off the ground, then your (X, Z) extent is only ±9m.
Clearly a real camera could be placed anywhere in the scene, facing any direction. Even the "roll" angle of the camera could change how much is visible. There are actually infinitely many such camera positions, so you will need to constrain your criteria somewhat.
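To make the pyramid geometry concrete, here is a small illustration of my own (not from the answer) of the visible half-extent at a given height for that straight-down camera:
#include <cmath>

// Camera at 'eyeHeight' looking straight down with a square viewport.
// Returns the visible half-extent along X (or Z) at height 'objectY'.
float halfExtentAtHeight(float eyeHeight, float objectY, float fovRadians) {
    return (eyeHeight - objectY) * std::tan(fovRadians * 0.5f);
}

// With eyeHeight = 10 and a 90-degree FOV (tan(45 deg) = 1):
//   halfExtentAtHeight(10, 0, ...) == 10   (objects on the ground)
//   halfExtentAtHeight(10, 1, ...) == 9    (objects 1m off the ground)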

Take the line segment from the start point to the end point. Construct a plane orthogonal to this line segment through its midpoint. Then position the camera somewhere in this plane, at a distance greater than the following from the intersection point of the plane and the line, looking at that intersection point. The up vector of the camera must lie in the plane, and the horizontal field of view must be 90 degrees.
distance = sqrt(dx^2 + dy^2 + dz^2) / 2
All of these camera positions will have the start point and the end point on the left and right borders of the viewport, vertically centered.
Another solution might be to write a function that takes the startpoint, the endpoint, and the desired position of both points on the screen. Then just solve the projection equation for the camera transformation.
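A sketch of that construction, using GLM for the vector math; the choice of a horizontal offset direction within the plane is my own assumption:
#include <glm/glm.hpp>

// Place the camera on the plane through the midpoint of start..end,
// orthogonal to the segment, at the minimum distance for a 90-degree
// horizontal field of view.
glm::vec3 cameraPosition(glm::vec3 start, glm::vec3 end) {
    glm::vec3 mid = (start + end) * 0.5f;
    glm::vec3 dir = glm::normalize(end - start);
    // Any direction orthogonal to 'dir' lies in the plane; crossing with
    // world-up keeps the camera level (assumes the segment is not vertical).
    glm::vec3 offset = glm::normalize(glm::cross(dir, glm::vec3(0, 1, 0)));
    float distance = glm::length(end - start) * 0.5f;  // sqrt(dx^2 + dy^2 + dz^2) / 2
    return mid + offset * distance;
}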

It depends. For example, if the object is going to move in a plane, you can just place the camera outside a ball circumscribing its movement area (this relies on the FOV being 90 degrees, which is a fortunate angle).
If the object is going to move in 3D, it's much more difficult. It would help if you specified the region where the object moves (cube vs. ball...) and the direction you want to see it from.


What are we trying to achieve by creating right and down vectors in this ray tracing depiction?

Often, we see the following picture when talking about ray tracing.
Here, I see the Z axis as the direction the camera points when looking straight ahead, and the XY grid as the grid that the camera is seeing. From the camera's point of view, we see the usual Cartesian grid my classmates and I are used to.
Recently I was examining code that simulates this. One thing that is not obvious from this picture to me is the requirement for the "right" and "down" vectors. Obviously we have look_at, which shows where the camera is looking. And campos is where the camera is located. But why do we need camright and camdown? What are we trying to construct?
// World-space axes; only Y (the world "up") is used below.
Vect X (1, 0, 0);
Vect Y (0, 1, 0);
Vect Z (0, 0, 1);
Vect campos (3, 1.5, -4);   // camera position
Vect look_at (0, 0, 0);     // point the camera is aimed at
// Vector from look_at back to the camera.
Vect diff_btw (
    campos.getVectX() - look_at.getVectX(),
    campos.getVectY() - look_at.getVectY(),
    campos.getVectZ() - look_at.getVectZ()
);
// Negate and normalize to get the unit viewing direction.
Vect camdir = diff_btw.negative().normalize();
// "Right" is perpendicular to both world-up and the view direction.
Vect camright = Y.crossProduct(camdir);
// "Down" completes the camera's orthogonal basis.
Vect camdown = camright.crossProduct(camdir);
Camera scene_cam (campos, camdir, camright, camdown);
I was searching about this question recently and found this post as well: Setting the up vector in the camera setup in a ray tracer
Here, the answerer says this: "My answer assumes that the XY plane is the "ground" in world space and that Z is the altitude. Imagine standing on the floor holding a camera. Its position has a positive Z component, and its view vector is nearly parallel to the ground. (In my diagram, it's pointing slightly upwards.) The plane of the camera's film (the uv grid) is perpendicular to the view vector. A common approach is to transform everything until the film plane coincides with the XY plane and the camera points along the Z axis. That's equivalent, but not what I'm describing here."
I'm not entirely sure why "transformations" are necessary. How is this point of view different from the picture at the top? Here they also say that they need an "up" vector and a "right" vector to "construct an image plane". I'm not sure what an image plane is.
Could someone explain better the relationship between the physical representation and code representation?
How do you know that you always want the camera's "up" to be aligned with the vertical lines in the grid in your image?
Trying to explain it another way: The grid is not really there. That imaginary grid is the result of the calculations of camera's directional vectors and the resolution you are rendering in. The grid is not what decides the camera angle.
When you are holding a physical camera in your hand, like the camera in your cell phone, don't you ever rotate the camera a little bit for effect? Or when filming, don't you sometimes want to slowly rotate the camera? Haven't you seen movies where the camera is rotated?
In the same way, you may want to rotate the "camera" in your ray-traced world. And rotating only the camera is much easier than rotating all the objects in the scene (maybe millions of them!)
Check out the example of rotating the camera from the movie Ice Age here:
https://youtu.be/22qniGtZhZ8?t=61
The (up or down) and right vectors construct the plane you project the scene onto. Since the scene is in 3D, you need to project it onto a 2D plane in order to render a picture to display on your screen.
If you have the camera position and direction, you still don't know whether you're holding the camera right-side up, upside down, or tilted to the left or right.
Using the camera position, look-at, up (or down), and right vectors, we can uniquely define how the 3D scene is projected into a 2D picture.
Concretely, if you look at the code and the picture: the 3D scene is the objects displayed. The image/projection plane is the grid in front of the camera. Its orientation is defined by the camright and camdir vectors (and since the camera's line of sight is perpendicular to the image plane, camdown is uniquely defined by the other two).
The placement of the grid is based on the camera's position and intrinsic properties (not displayed here, but the camera will have a specific field of view).
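As an illustration (my own sketch using GLM rather than the Vect class above, and assuming a square image with a 90-degree field of view), the basis vectors are what let you walk across the image plane pixel by pixel:
#include <glm/glm.hpp>

// Direction of the primary ray through pixel (px, py) of a square
// width x width image. camdir, camright and camdown are the unit basis
// vectors built in the code above.
glm::vec3 rayDirection(glm::vec3 camdir, glm::vec3 camright,
                       glm::vec3 camdown, int px, int py, int width) {
    // Map the pixel center to [-0.5, 0.5] across the image plane, which
    // sits 0.5 units in front of the eye (that yields a 90-degree FOV).
    float u = (px + 0.5f) / width - 0.5f;
    float v = (py + 0.5f) / width - 0.5f;
    return glm::normalize(camdir * 0.5f + camright * u + camdown * v);
}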

How to calculate near and far plane for glOrtho in OpenGL

I am using an orthographic projection (glOrtho) for my scene. I implemented a virtual trackball to rotate an object; besides that, I also implemented zoom in/out on the view matrix. Say I have a cube of size 100 units located at (0, -40000, 0), far from the origin. If the center of rotation is located at the origin, then once the user rotates the cube and zooms in or out, it could be positioned somewhere around (0, 0, 2500000) (this position is just an assumption; it is calculated after multiplying by the view matrix). Currently I define a very big range for the near (-150000) and far (150000) planes, but sometimes the object still lies outside either the near or far plane and just turns invisible. If I define larger near and far clipping planes, say -1000000 and 1000000, it produces ugly z-buffer artifacts. So my question is: how do I correctly calculate the near and far planes while the user rotates the object in real time? Thanks in advance!
Update:
I have implemented a bounding sphere for the cube. I use the inverse of the view matrix to calculate the camera position, and I calculate the distance from the camera position to the center of the bounding sphere (the center of the bounding sphere is transformed by the view matrix). But I couldn't get it to work. Can you explain further what the relationship is between the camera position and the near plane?
A simple way is to use a "bounding sphere". If you know the data's bounding box, its maximum diagonal length is the diameter of the bounding sphere.
Let's say you calculate the distance 'dCC' from the camera position to the center of the sphere, and let 'r' be the radius of that sphere. Then:
Near = dCC - r - smallMargin
Far = dCC + r + smallMargin
'smallMargin' is a value used just to avoid clipping points on the surface of the sphere due to numerical precision issues.
The center of the sphere should be the center of rotation. If not, the diameter should grow so as to cover all data.
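A minimal sketch of that calculation (the function name and the margin heuristic are my own; the camera position would come from the inverse of your view matrix, as in the update above):
#include <cmath>

// Near/far planes from the camera position and a bounding sphere
// (center cx, cy, cz; radius r), all in the same space.
void computeNearFar(float camX, float camY, float camZ,
                    float cx, float cy, float cz, float r,
                    float &nearPlane, float &farPlane) {
    float dx = cx - camX, dy = cy - camY, dz = cz - camZ;
    float dCC = std::sqrt(dx*dx + dy*dy + dz*dz);  // camera-to-center distance
    float smallMargin = r * 0.01f;  // guards against precision clipping
    nearPlane = dCC - r - smallMargin;
    farPlane  = dCC + r + smallMargin;
}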

Orientation of figures in space

I have a sphere in my program and I intend to draw some rectangles at a distance x from the centre of this sphere. The figure looks something like the one below:
The rectangles are drawn at (x,y,z) points that I already have in a vector of 3D points.
Let's say the distance x from the centre is 10. Notice the orientation of these rectangles: they are tangential to an imaginary sphere of radius 10 (perpendicular to an imaginary line from the centre of the sphere to the centre of each rectangle).
Currently, I do something like the following:
For n points in vector<vec3f> pointsInSpace where the rectangles have to be plotted:
for (int i = 0; i < pointsInSpace.size(); ++i) {
    // draw rectangle at (x, y, z)
}
This does not give the tangential orientation that I am looking for.
It looked to me like a case of applying roll, pitch, and yaw rotations to each of these rectangles, somehow using quaternions to make them tangential in the way I am looking for.
However, that seemed a bit complex to me, and I wanted to ask about a better method to do this.
Also, the rectangle might change to some other shape in the future, so a generic solution would be appreciated.
I think you essentially want the same transformation as would be accomplished with a LookAt() function (you want the rectangle to 'look at' the sphere, along a vector from the rectangle's center, to the sphere's origin).
If your rectangle is formed of the points:
(-1, -1, 0)
(-1, 1, 0)
( 1, -1, 0)
( 1, 1, 0)
Then the rectangle's normal will be pointing along Z. This axis needs to be oriented towards the sphere.
So the normalised vector from your point to the center of the sphere is the Z-axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z-axis is pointing in the same direction.
The cross of the 'up' and 'z' axes gives the x axis, and then the cross of the 'x' and 'z' axes gives the 'y' axis.
These three axes (x,y,z) directly form a rotation matrix.
This resulting transformation matrix will orient the rectangle appropriately. Either use GL's fixed function pipeline (yuk), in which case you can just use gluLookAt(), or build and use the matrix above in whatever fashion is appropriate in your own code.
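A sketch of that basis construction (using GLM; the fallback up vector and the parallelism threshold are my own choices):
#include <cmath>
#include <glm/glm.hpp>

// Build the rotation axes for a rectangle at 'point' so that its local
// Z axis (the rectangle's normal) faces the sphere's center.
void lookAtBasis(glm::vec3 point, glm::vec3 center,
                 glm::vec3 &x, glm::vec3 &y, glm::vec3 &z) {
    z = glm::normalize(center - point);
    glm::vec3 up(0, 1, 0);
    if (std::fabs(z.y) > 0.999f)   // z is (nearly) parallel to 'up',
        up = glm::vec3(1, 0, 0);   // so choose a different up vector
    x = glm::normalize(glm::cross(up, z));
    y = glm::cross(z, x);
    // The three axes x, y, z form the rotation matrix.
}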
Personally I think the answer from JasonD is enough. But here is some info on the calculation involved.
Mathematically speaking this is a rather simple problem: what you have is two known vectors. You know the position vector and the sphere's normal vector. Since the square can be rotated arbitrarily around the vector from the centre of your sphere, you need to define one more vector, the up vector; without an up vector the problem has no unique solution.
Once you define an up vector, the problem becomes simple. Assuming your square is on the XY plane, as JasonD suggests above, your matrix becomes:
up_dot_n_dot_n.x   up_dot_n_dot_n.y   up_dot_n_dot_n.z   0
n.x                n.y                n.z                0
up_dot_n.x         up_dot_n.y         up_dot_n.z         0
p.x                p.y                p.z                1
Where n is the unit normal vector of p - center of sphere (which is trivial if the sphere is at the centre of the coordinate system), up is an arbitrary unit vector, and p, which follows from the definition, is the position.
The solution has a bit of a singularity at the up direction of the sphere. An alternative is to first rotate 360° around up, then 180° around the rotated axis crossed with up. This produces the same thing with a different approach and no singularity problem.

Circle that moves on the edge of a circle

As the title describes, I want to make a tiny circle that circulates on the edge of a sector of another, big circle. I have implemented the sector of the circle; now the only issue is how to make the small circle circulate on the edge of this sector. I have tried various ways; however, none of them proved successful, so I would appreciate some tips on how to implement it.
Thanks in advance.
You just have to consider that, for a circle of radius 1 centered on the origin, every point on the circle can be described as:
P = [sin(alpha); cos(alpha)]
With 0<=alpha<2*pi
Now, if you change the radius and the center you will have:
P = [(radius * sin(alpha))+x_center; (radius*cos(alpha))+y_center]
So, just have a loop for alpha going from 0 to 2*pi (or whatever section of the circle you need) and use the above equation to calculate the position of the center of the small circle.
I presume you have a function that can draw a circle at a given position, in Cartesian co-ordinates, with a given radius.
Use polar co-ordinates (angle / radius) and set the radius to the radius of the big circle minus that of the small circle. Set the angle to wherever you want the small circle to start. Then set up a loop to increment the angle by a given amount. After each increment, clear the screen and draw the big circle. Then convert the polar co-ordinates into Cartesian, add on the centre of the big circle, and draw the small circle. Hold for as long as you want.
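A sketch of that loop (the drawing and screen-clearing calls are hypothetical placeholders for whatever routines you already have, so they are left as comments):
#include <cmath>

void animateSmallCircle(float bigX, float bigY, float bigR, float smallR) {
    const float PI = 3.14159265f;
    float r = bigR - smallR;  // path radius: keeps the small circle on the edge
    for (float alpha = 0.0f; alpha < 2.0f * PI; alpha += 0.01f) {
        // clearScreen();                     // hypothetical
        // drawCircle(bigX, bigY, bigR);      // hypothetical: the big circle
        float x = bigX + r * std::sin(alpha);  // polar -> Cartesian,
        float y = bigY + r * std::cos(alpha);  // offset by the big centre
        // drawCircle(x, y, smallR);          // hypothetical: the small circle
        // hold / swap buffers here
    }
}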

Finding OpenGL image plane coordinates

I am trying to implement a raytracer that uses an arbitrary camera position and perspective projection. I have the camera position, the look at position, the angle of field of view, but I cannot figure out the direction I have to shoot the rays so that each ray corresponds to a pixel. If I could find a way to find the coordinates of the image plane, or the direction vectors the rays should have, it would be downhill from there. Any help is appreciated.
I would do the following: imagine that there is a rectangular grid just in front of your eye. The grid is defined by one point (the (0;0) point of the grid) and two (three-dimensional) base vectors (x, y); with this you can calculate a ray as (origin + Xcoordinate * x + Ycoordinate * y) - eye. By adjusting the distance between your eye point and the origin, or by adjusting the length of the base vectors, you can get the desired angle of view.
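A sketch of that grid construction (using GLM; the helper name, the world-up choice, and the fov parameter being the vertical field of view in radians are my assumptions):
#include <cmath>
#include <glm/glm.hpp>

// Ray direction through pixel (px, py) for an eye at 'eye' looking at
// 'lookAt', with vertical field of view 'fov' (radians).
glm::vec3 pixelRay(glm::vec3 eye, glm::vec3 lookAt, float fov,
                   int px, int py, int width, int height) {
    glm::vec3 forward = glm::normalize(lookAt - eye);
    glm::vec3 right   = glm::normalize(glm::cross(forward, glm::vec3(0, 1, 0)));
    glm::vec3 up      = glm::cross(right, forward);

    // Size of the grid one unit in front of the eye.
    float halfH = std::tan(fov * 0.5f);
    float halfW = halfH * float(width) / float(height);

    float u = ((px + 0.5f) / width  * 2.0f - 1.0f) * halfW;
    float v = (1.0f - (py + 0.5f) / height * 2.0f) * halfH;

    // (origin + Xcoordinate * x + Ycoordinate * y) - eye, grid at distance 1.
    return glm::normalize(forward + right * u + up * v);
}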