I have implemented a first-person camera in OpenGL, but when I get close to an object it starts to disappear, so I want to set the near plane close to zero so I can get closer to objects. Can anybody tell me the best way to do that?
Thank you.
Other responses focus on using GLUT. GLUT is not recommended for professional, or even modern, development, and the OP says nothing about using GLUT - just OpenGL. So, I'll chime in:
A little background
zNear, the distance from the origin to the near clip plane, is one of the parameters used to build the projection matrix:
[Projection matrix figure omitted; a formatted version is at the link below.]
Where
SYMBOL   MEANING                  TYPICAL VALUE
------   ----------------------   ----------------------------------------
fov      vertical field of view   45 - 90 degrees
aspect   aspect ratio             around 1.8 (framebuffer width / height)
znear    near clip plane          +1
zfar     far clip plane           10
SYMBOL       MEANING                           FORMULA
----------   -------------------------------   -------------------
halfHeight   half of frustum height at znear   znear * tan(fov/2)
halfWidth    half of frustum width at znear    halfHeight * aspect
depth        depth of view frustum             zfar - znear
(A more nicely-formatted version is at http://davidlively.com/programming/graphics/opengl-matrices/perspective-projection/)
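To make the formulas concrete, here is a minimal C++ sketch computing those quantities (the struct and function names are mine, not from the linked article):

#include <cmath>

// Frustum quantities from the tables above; fov is the vertical field of view.
struct FrustumParams { float halfHeight, halfWidth, depth; };

FrustumParams frustumParams(float fovDegrees, float aspect, float znear, float zfar)
{
    const float fov = fovDegrees * 3.14159265f / 180.0f; // degrees -> radians
    FrustumParams p;
    p.halfHeight = znear * std::tan(fov * 0.5f); // half of frustum height at znear
    p.halfWidth  = p.halfHeight * aspect;        // half of frustum width at znear
    p.depth      = zfar - znear;                 // depth of the view frustum
    return p;
}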
When the perspective divide takes place - between the vertex and fragment shaders - the vertices are converted from clip space to normalized device coordinates (NDC). In this space, anything that fits in a 2x2x2 (x, y, z) box will be rendered: any fragments outside the box with corners (-1, -1, -1) and (+1, +1, +1) will be clipped.
Practical Upshot Being
It doesn't matter what your zNear and zFar values are, as long as they offer sufficient resolution & precision, and zFar > zNear > 0.
Your collision detection & response system is responsible for keeping the camera from getting too close to the geometry. "Too close" is a function of your zNear and geometry bounds. Even if you have a zNear of 1e-9, geometry will be clipped once it crosses the near plane.
So: fix your collision detection and stop worrying about your zNear.
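As a rough sketch of that division of labor - distanceToNearestSurface() and surfaceNormalAt() are hypothetical queries into your collision system, not any real API:

// Keep the eye at least zNear plus a small margin away from geometry.
const float zNear  = 0.1f;
const float margin = 0.05f;
Vec3 proposed = cameraPos + cameraVelocity * dt;  // Vec3: placeholder vector type
float d = distanceToNearestSurface(proposed);     // hypothetical collision query
if (d < zNear + margin)
    proposed += surfaceNormalAt(proposed) * (zNear + margin - d); // push back out
cameraPos = proposed;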
So if anybody can tell me
gluPerspective.
The near plane is set when you set the projection matrix, either with glFrustum or glOrtho. One of the parameters is the near plane. Notice that the distance to the near plane must be > 0.
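For example, with the legacy fixed-function API (the numbers are just illustrative):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-0.05, 0.05, -0.05, 0.05, // left, right, bottom, top at the near plane
           0.1, 100.0);             // near = 0.1 (must be > 0), far = 100
glMatrixMode(GL_MODELVIEW);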
You don't have many options.
Cast some rays from the camera (for instance through the 4 corners and the center of the view), take the shortest hit distance minus an epsilon, clamped to a decent value > 0; 0.1f will do (see the sketch after this list).
Simply forbid the camera to be there in the first place! For this you can attach a sphere to it in your physics engine, check whether it intersects anything, and if it does, move it (how and where to move it is your problem, since it's largely a gameplay decision - think of Super Mario Galaxy).
Never set the near plane too small, or you will run into precision issues with your z-buffer. The far plane can be quite large, though. Values like (0.1, 1000) can be all right depending on your application.
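A sketch of the first option; castRay() and cornerAndCenterRays() are hypothetical helpers returning the distance to the first hit and the five probe rays:

float nearest = FLT_MAX;
for (const Ray &r : cornerAndCenterRays(camera)) // 4 corners + center
    nearest = std::min(nearest, castRay(r));     // hypothetical scene query
float znear = std::max(nearest - 0.01f, 0.1f);   // minus epsilon, clamped to > 0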
Related
I'm working with OpenGL but this is basically a math question.
I'm trying to calculate the projection matrix. I have a point on the view plane R(x,y,z) and the normal vector of that plane N(n1,n2,n3).
I also know that the eye is at (0,0,0), which I guess in technical terms is the Perspective Reference Point.
How can I arrive at the perspective projection from this data? I know how to do it the regular way, where you get the FOV, aspect ratio, and near and far planes.
I think you created a bit of confusion by putting this question under the "opengl" tag. The problem is that in computer graphics, the term projection is not understood in a strictly mathematical sense.
In maths, a projection is defined (and the following is not the exact mathematical definition, but just my own paraphrasing) as something which doesn't further change the result when applied twice. Think about it: when you project a point in 3D space onto a 2D plane (which is still in that 3D space), each point's projection ends up on that plane. But points which already are on this plane don't move at all any more, so you can apply the projection as many times as you want without changing the outcome any further.
The classic "projection" matrices in computer graphics don't do this. They transform the space so that a general frustum is mapped to a cube (or cuboid). For that, you basically need all the parameters that describe the frustum, which typically are the aspect ratio, the field of view angle, and the distances to the near and far planes, as well as the projection direction and the center point (the latter two are typically implicitly defined by convention). For the general case, there are also horizontal and vertical asymmetry components (think of it like "lens shift" on a projector). All of that is what the typical projection matrix in computer graphics represents.
To construct such a matrix from the parameters you have given is not really possible, because you are lacking most of them. Also - and I think this is kind of revealing - you have given a view plane. But the projection matrices discussed so far do not define a view plane: any plane parallel to the near or far plane and in front of the camera can be imagined as the viewing plane (behind the camera would also work, but the image would be mirrored), if you should need one. In the strict sense, it would only be a "view plane" if all of the projected points ended up on that plane - which the computer graphics perspective matrix explicitly doesn't do. It instead keeps their 3D distance information - which also means that the operation is invertible, while a classical mathematical projection typically isn't.
From all of that, I simply guess that what you are looking for is a perspective projection from 3D space onto a 2D plane, as opposed to a perspective transformation used for computer graphics. And all the parameters you need for that are just the view point and a plane. Note that this is exactly what you have given: the projection center shall be the origin, and R and N define the plane.
Such a projection can also be expressed in terms of a 4x4 homogeneous matrix. There is one thing that is not defined in your question: the orientation of the normal. I'm assuming standard maths convention again and assume that the view plane is defined as <N,x> + d = 0. Plugging R into that equation gives d = -N_x*R_x - N_y*R_y - N_z*R_z. So the projection matrix is just
(    1       0       0    0 )
(    0       1       0    0 )
(    0       0       1    0 )
( -N_x/d  -N_y/d  -N_z/d  0 )
There are a few properties of this matrix. It has a zero column, so it is not invertible. Also note that for every point (s*x, s*y, s*z, 1) you apply it to, the result (after division by the resulting w, of course) is the same no matter what s is - so every point on a line between the origin and (x,y,z) results in the same projected point, which is exactly what a perspective projection is supposed to do. Finally, note that w = (N_x*x + N_y*y + N_z*z)/-d, so for every point fulfilling the plane equation above, w = -d/-d = 1 results. Combined with the identity transform in the other dimensions, this just means that such a point is unchanged.
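A minimal C++ sketch of applying this matrix to a point, reduced to the resulting scalar arithmetic (Vec3 is just an assumed helper struct):

struct Vec3 { double x, y, z; };

// Project p from the origin onto the plane <N,x> + d = 0 through R.
Vec3 projectOntoPlane(Vec3 N, Vec3 R, Vec3 p)
{
    double d = -(N.x*R.x + N.y*R.y + N.z*R.z);     // from plugging R into the plane equation
    double w = -(N.x*p.x + N.y*p.y + N.z*p.z) / d; // w produced by the matrix's last row
    return { p.x / w, p.y / w, p.z / w };          // perspective divide; undefined if w == 0
}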
The projection matrix must be at (0,0,0), viewing in the +Z or -Z direction
This is a must because many things in OpenGL depend on it, like fog, lighting, etc. So if your direction or position is different, then you need to move that part into the camera matrix. Let's assume your focal point is (0,0,0), as you stated, and the normal vector is (0,0,+/-1).
Z near
is the distance between the focal point and the projection plane, so znear is the perpendicular distance between the plane and (0,0,0). If the assumption above holds, then
znear=R.z
Otherwise you need to compute it; I think you have everything you need for that (see the sketch after this list):
cast a line from R in the direction N
find the point on that line closest to the focal point (0,0,0)
znear is then the distance from that point to R
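That recipe boils down to a point-plane distance; a short sketch (N and R as plain x/y/z structs, names mine):

double nd  = N.x*R.x + N.y*R.y + N.z*R.z;        // dot(N, R)
double len = sqrt(N.x*N.x + N.y*N.y + N.z*N.z);  // |N|
double znear = fabs(nd) / len;                   // reduces to |R.z| when N = (0,0,+/-1)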
Z far
is determined by the depth buffer bit width and znear:
zfar=znear*(1<<(cDepthBits-1))
This is the maximal usable zfar (for my purposes); if you need more precision then lower it a bit. Do not forget that precision is highest near znear and much, much worse near zfar. The zfar is usually set to the max view distance and znear computed from it, or set to the minimum focus range.
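For example, plugging in a 24-bit depth buffer:

int   cDepthBits = 24;
float zfar = znear * (float)(1 << (cDepthBits - 1)); // znear = 0.1 -> zfar ≈ 838861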
view angle
I mostly use a 60 degree view: zang=60.0 [deg]
Common males in my region can see up to 90 degrees, but that includes peripheral vision; a 60 degree view is more comfortable to look at.
Females have a slightly wider field of view ... but I have never heard any complaints from them about 60 degree views, so let's assume it's comfortable for them too ...
Aspect
The aspect ratio is determined by your OpenGL window dimensions xs, ys:
aspect=(xs/ys)
This is how I set the projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(zang/aspect,aspect,znear,zfar);
// gluPerspective has an inaccurate tangent, so correct the perspective matrix like this:
double perspective[16];
glGetDoublev(GL_PROJECTION_MATRIX,perspective);
const double deg=M_PI/180.0;                 // degrees -> radians
perspective[ 0]=   1.0/tan(0.5*zang*deg);
perspective[ 5]=aspect/tan(0.5*zang*deg);
glLoadMatrixd(perspective);
perspective is a copy of the projection matrix; I use it for mouse position conversions etc ...
If you do not correct the matrix, then you will be off when using advanced techniques like overlapping multiple frustums to get a high-precision depth range. I use this to obtain a <0.1m, 1000AU> frustum with a 24-bit depth buffer, and the inaccuracy would cause the images not to fit together perfectly ...
[Notes]
if the focal point is not really (0,0,0), or you are not viewing along the Z axis (i.e. you do not have a camera matrix and instead use the projection matrix for that), then basic scenes/techniques will show no problem; the trouble starts with the use of advanced graphics. If you use GLSL you can handle this without problems, but the fixed-function OpenGL pipeline cannot handle it properly. This is also called PROJECTION_MATRIX abuse
[edit1] a few links
If your view is a standard frustum, then write the matrix yourself (see gluPerspective); otherwise look here: Projections, for some ideas on how to construct it
[edit2]
From your comment I see it like this:
f is your viewing point (the axes are the global world axes)
f' is the viewing point if R were the center of the screen
So create the projection matrix for the f' position (as explained above), and create a transform matrix that transforms f' to f. The transformed f must have the same Z axis as f'; the other axes can be obtained by cross products. Use that as the camera matrix, or multiply both together and use the result as an abused projection matrix
How to construct the matrix is explained in the Understanding transform matrices link from my earlier comments
I understand that zNear and zFar mark the clipping bounds of a scene, but OpenTK restricts the values to be greater than zero. Does this mean all my objects should be drawn on the positive Z axis so that they're not clipped?
No, this is only the render clipping, after viewing translations. So if you render an object based at, for example, {0,0,-100} with your camera at {0,0,-110}, it will still render if within the clipping planes, but stuff farther than -110+zFar and -110-zNear will be clipped. That's a pretty simple explanation, but in effect how it works.
Don't confuse the different spaces that coordinates can be represented in or translated to. The check-marked answer oversimplifies so much that it's possibly not even a correct answer in many cases.
The zNear and zFar are distances away from the "camera" or the eye of the view, in world units but not in world coordinates; therefore they do have to be positive numbers. They are also, loosely speaking, measured along the negative axis of eye or camera space. Only if the camera is aligned with the z axis, pointing towards -z space, is the check-marked answer correct, or your statement about what is being clipped.
They do help define the nearest and farthest clipping bounds, but they do not mark the clipping bounds of a scene. Again, this would depend on your camera's position. And your definition of "a scene".
It's often said that OpenGL has no camera. But I prefer not to think of it that way: there is a view, and therefore there is a camera. Using a lookAt function you can place your view/camera anywhere in your scene and point it in any direction you want. To me, your scene is all the objects that you're rendering, not just the current view of those objects. Thinking of it that way, your zNear and zFar don't limit your scene at all; they only limit the viewing depth of your camera. It's more like a tape measure sticking out from your eye: its vector changes as your view of things changes.
For example, imagine a scene of a grid of blocks in rows and columns along the x and z axes, all at y = 0. If your camera is aligned with the z axis, then zNear and zFar relate to the world z axis; you'd probably see rows of blocks receding from the camera. But if the camera is floating above everything at y = +100 and pointed down, you'd see it more as a grid, and in that case zNear and zFar have nothing to do with the z axis of your scene. They only govern the camera's clipping of objects along the camera's own z axis - in camera space.
I am confused about the position of objects in OpenGL. The eye position is (0,0,0) and the projection plane is at z = -1. Will the objects be in between the eye position and the plane (z from 0 to -1)? Or behind the projection plane? And is there any particular reason for it being so?
First of all, there is no eye in modern OpenGL. There is also no camera. There is no projection plane. You define these concepts by yourself; the graphics library does not give them to you. It is your job to transform your object from your coordinate system into clip space in your vertex shader.
I think you are thinking about projection wrong. Projection doesn't move objects in the same sense that a translation or rotation matrix might. If you take a look at the link above, you can see that in order to render a perspective projection, you calculate the x and y components of the projected coordinate with R = V*(ez/pz), where ez is the depth of the projection plane, pz is the depth of the object, V is the coordinate vector, and R is the projection. Almost always you will use ez = 1, which turns that equation into R = V/pz, allowing you to place pz in the w coordinate and let OpenGL do the "perspective divide" for you. Assuming you have your eye and plane in the correct places, projecting a coordinate is almost as simple as dividing by its z coordinate. Your objects can be anywhere in 3D space (even behind the eye), and you can project them onto your plane so long as you don't divide by zero or invalidate the z coordinate that you use for depth testing.
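A tiny sketch of that arithmetic (names illustrative, not any particular API):

struct Vec3 { float x, y, z; };

// Perspective-project v onto the plane ez = 1: R = V / pz.
Vec3 projectToPlane(Vec3 v)
{
    // Caller must ensure v.z != 0, exactly as noted above.
    return { v.x / v.z, v.y / v.z, v.z }; // keep z for depth testing
}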
There is no "projection plane" at z=-1. I don't know where you got this from. The classic GL perspective matrix assumes an eye space where the camera is located at the origin, looking in the -z direction.
However, there is the near plane at z < 0, and everything in front of the near plane is going to be clipped. You cannot put the near plane at z = 0, because then you would end up with a division by zero when trying to project points on that plane. That is one reason why the viewing volume isn't a pyramid with the eye point at the apex, but a pyramid frustum.
This is, by the way, also true for real-world eyes and cameras: the projection center lies behind the lens, so no object can get infinitely close to the optical center in either case.
The other reason why you want a reasonably large near clipping distance is the precision of the depth buffer. The whole depth range between the near and the far plane has to be mapped to depth values with a limited number of bits, typically 24. So you want to keep the far plane as close as possible and push the near plane out as far as possible. The non-linear mapping of the screen-space z coordinate makes this even more important, as the precision is non-uniformly distributed over that range.
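To see how non-uniform it is, here is the eye-space-to-NDC depth mapping for the classic GL convention (eye looking down -Z, NDC z in [-1, +1]); the formula is standard, the function name is mine:

// z_eye in [-n, -f] maps non-linearly to NDC z in [-1, +1].
double ndcDepth(double zEye, double n, double f)
{
    return (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * zEye);
}
// With n = 0.1 and f = 1000, ndcDepth reaches 0 at zEye ≈ -0.2: half the NDC
// depth range is spent on roughly the first 0.1 units past the near plane.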
I am confused about the volume of the view generated by the glOrtho method.
I know that the last two parameters are for the Z axis:
the first one represents the distance between the viewer and the near plane, and the second one the distance between the viewer and the far plane.
My question is: where exactly does the viewer (camera) lie on the Z coordinate?
Also, in this link there is program code that makes the near plane positive and the far plane negative. In this case, can we say that -Z is behind the viewer and +Z is in front of the viewer?
If yes, try making the Z coordinate negative for all vertices of one of the triangles; you will note that it still appears, although it is behind the viewer. Why??
the first one represents the distance between the viewer and the near plane, and the second one the distance between the viewer and the far plane
No, it isn't. An orthographic projection defines a box. The zNear and zFar are the positions of the box's near and far faces, not distances from the "viewer".
Orthographic projections don't have a "viewer" in the same way that perspective projections do. They have a direction of view, not a view position. And the direction of the view is always the direction that puts zFar the farthest away and zNear being closest. If zNear is larger than zFar, then the direction of view is in the positive Z; otherwise, it's the negative Z.
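Two illustrative calls (the numbers are arbitrary):

/* zNear = 1, zFar = 10: the box covers eye-space z from -1 to -10,
   so the implied view direction is -Z. */
glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, 10.0);

/* zNear = 1, zFar = -1 (positive near, negative far, as in the linked program):
   the box covers z from -1 to +1, so triangles on either side of z = 0 are
   inside it - which is why the "behind the viewer" triangle still appears. */
glOrtho(-1.0, 1.0, -1.0, 1.0, 1.0, -1.0);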
Actually, your question is a little bit confusing. I think you can try glLookAt() to make the object appear from a different angle and see the difference. Here is a link:
http://mycodelog.com/2010/05/28/glcamera/
How can I render a textured plane at some z-position so that it is visible towards infinity?
I could achieve this by drawing a really huge plane, but if I move my camera off the ground to a higher altitude, I would start to see the plane's edges, which I want to avoid.
If this is even possible, I would prefer a non-shader method.
Edit: I tried the 4D coordinate system as suggested, but it works horribly: my textures get distorted even at a camera position of 100, so I would have to draw multiple textured quads anyway. Perhaps I could do that, and draw only the farthest quads with the 4D coordinate system? Any better ideas?
Edit 2: for those who don't have a clue what OpenGL texture distortion is, here's an example from the tests I did with 4D vertex coordinates:
(in case the image is not visible: http://img828.imageshack.us/img828/469/texturedistort.jpg )
Note that it only happens when the camera gets far enough away; in this case it's only 100.0 units from the middle! (middle = (0,0), where my 4 triangles start to go towards infinity). Usually this happens at around 100000.0 or so, but with 4D vertices it seems to happen earlier for some reason.
You cannot render an object of infinite size.
You are more than likely confusing the concept of projection with rendering objects of infinite size. A 4D homogeneous coordinate whose W is 0 represents a 3D position that is at infinity relative to the projection. But that doesn't mean a point infinitely far from the camera; it means a point infinitely close to the camera. That is, it represents a point whose Z coordinate (before multiplication with the perspective projection matrix) was equal to the camera position (in camera space, this is 0).
See, under perspective projection, a point that is in the same plane as the camera is infinitely far away on the X and Y axes. That is the nature of the perspective projection. 4D homogeneous coordinates allow you to give all of these points finite numbers, and therefore you can do useful mathematics with them (like clipping).
4D homogeneous coordinates do not allow you to represent an infinitely large surface.
Drawing an infinitely large plane is easy - all you need is to compute the horizon line in screen coordinates. To do so, simply take two non-collinear 4D directions (say, [1, 0, 0, 0] and [0, 0, 1, 0]), then compute their positions on the screen by multiplying manually with the view matrix and the projection matrix and clipping into viewport coordinates. When you have these two points, you can compute the 2D line through them on the screen and clip the screen against it. There you have your infinite plane (the lower polygon). However, it is difficult to display a texture on this plane, because it would be infinitely large. But if your texture is simple (e.g. a grid), you can compute it yourself with 4D coordinates, using the same scheme as above: computing points and their corresponding vanishing points and connecting them.
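A sketch of the vanishing point computation described above, assuming GLM for the matrix math (the function name is mine):

#include <glm/glm.hpp>

// Screen-space vanishing point of a 4D direction (w == 0).
// Degenerate (clip.w near 0) when the direction is parallel to the image plane.
glm::vec2 vanishingPoint(const glm::mat4 &viewProj, const glm::vec4 &dir)
{
    glm::vec4 clip = viewProj * dir;
    return glm::vec2(clip.x, clip.y) / clip.w; // NDC x,y; map to viewport afterwards
}

// The horizon is the 2D line through two such points, e.g. for directions
// (1,0,0,0) and (0,0,1,0); the infinite plane is the screen region below it.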