At first I wondered why NDC ranges from -1 to 1, rather than from 0 to 1. I figured maybe having the origin at the center is useful for something.
But why is it using a left-handed coordinate system?
Could it be just so that the Z-value is higher for objects farther away? That would be a good enough reason for me.
But why is it using a left-handed coordinate system?
Let's add "by default" to that question. Because all that's needed to change it is glDepthRange(1.0f, 0.0f); and now you're right-handed. And yes, that is perfectly legal GL syntax; there has never been a restriction that the range near z is less than the range far z.
Why is it left-handed by default? Maybe someone on the ARB liked it that way. Maybe someone on the ARB liked increasing depth range instead of decreasing, and the lower-left corner orientation made that necessary. Maybe because the progenitor of OpenGL, IRIS GL, did it this way and the ARB didn't see any need to change it.
The reason for this is immaterial; handedness is purely a notational convenience. In some places, right-handed makes sense. In others, left-handed makes sense. And since it is trivially changed, just use what works for you.
It would then be consistent with everything else.
Consistent with what "everything else"? All of the fixed-function stuff that's been removed?
The viewport transform is all done by OpenGL internally, where nobody can get at it. You don't even provide a matrix; you just provide a viewport and depth range. So from the perspective of someone using fixed-function GL, everything really is right-handed.
The only time the handedness even comes up is when dealing with vertex shaders directly, where you have to know what the handedness of the clip-space is. And in that case, the change in handedness is a simple function of the perspective projection negating the Z. Or, if you like left-handed coordinates, the perspective projection not negating the Z. Or again, just reverse the glDepthRange and now you're right-handed.
But why is it using a left-handed coordinate system?
So that increasing distances from the projection plane along the viewing direction map to increasing values in the depth buffer.
Could it be just so that the Z-value is higher for objects farther away? That would be a good enough reason for me.
Well, that's exactly why it is that way.
I guess the change in handedness is owed to how graphics cards interpret Z values: a greater Z value means greater depth. The implied change of handedness can be performed by the projection matrix at no additional cost; changing it later would add an extra computation step.
I haven't done much OpenGL stuff, I assume there is an obvious cause of what I am seeing.
Basically the pictures below explain everything.
The only thing changed between the images is the x rotation on the projection matrix.
OpenGL 1.1 on Windows.
Any help is much appreciated.
This looks like an ortho projection. I suggest you increase the range between the near and far values. Unlike a perspective projection, where due to the nonlinearity you should limit the near-to-far range to what's absolutely necessary, you can safely choose a very wide near/far range for ortho without running into trouble. I suggest you use [-1000, 1000] (with a 24-bit depth buffer that gives you a depth resolution of about 1/8000 view-space Z units).
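For illustration, a minimal legacy-GL sketch with made-up bounds:

    /* Sketch: an orthographic projection with a deliberately generous Z range. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* left/right/bottom/top sized for the scene; near/far kept very wide.      */
    glOrtho(-10.0, 10.0, -10.0, 10.0, -1000.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);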
If I pick a spot on my monitor in screen X/Y, how can I obtain the point in 3D space, based on my projection and view matrices?
For example, I want to put an object at some depth and have it located at 10,10 in screen coords, so that when I update its world matrix it will render on screen at 10,10.
I presume it's fairly straightforward given I have my camera matrices, but I'm not sure offhand how to 'reverse' the normal process.
DirectXTk XMMath would be best, but I can no doubt sort it out from any linear algebra system (OpenGL, D3DX, etc).
What I'm actually trying to do is find a random point on the back clipping plane where I can start an object that then drifts straight towards the camera along its projection line. So I want to keep picking points in deep space that are still within my view (no point creating ones outside in my case) and starting alien ships (or whatever) at that point.
As discussed in my comments, you need four things to do this generally.
ModelView (GL) or View (D3D / general) matrix
Projection matrix
Viewport
Depth Range (let us assume default, [0, 1])
What you are trying to do is locate in world-space a point that lies on the far clipping plane at a specific x,y coordinate in window-space. The point you are looking for is <x,y,1> (z=1 corresponds to the far plane in window-space).
Given this point, you need to transform back to NDC-space. The specifics are actually API-dependent, since D3D's definition of NDC is different from OpenGL's -- they do not agree on the range of Z (D3D = [0, 1], GL = [-1, 1]).
Once in NDC-space, you can apply the inverse Projection matrix to transform back to view-space.
These are homogeneous coordinates and division by W is necessary.
From view-space, apply the inverse View matrix to arrive at a point in world-space that satisfies your criteria.
Most math libraries have a function called UnProject (...) that will do all of this for you. I would suggest using that, because you tagged this question D3D and OpenGL and the specifics of some of these transformations are different depending on the API.
You are better off knowing how they work, even if you never implement them yourself. I think the key thing you were missing was the viewport; I have an answer here that explains this step visually.
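Since the question also mentions OpenGL-style math libraries, here is a rough sketch of the steps above using GLM and GL conventions (the view, proj and viewport values are assumed to be whatever you already render with; this is an illustration, not your engine's API):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::unProject

    // Sketch: world-space point on the far plane under window coords (x, y).
    glm::vec3 FarPlanePoint(float x, float y, const glm::mat4& view,
                            const glm::mat4& proj, const glm::vec4& viewport)
    {
        // z = 1 in window space is the far plane (default depth range [0, 1]).
        glm::vec3 win(x, y, 1.0f);

        // unProject undoes the viewport, projection and view transforms and
        // performs the divide by W internally.
        return glm::unProject(win, view, proj, viewport);
    }

The viewport is passed as (x, y, width, height); note that window-space Y is measured from the bottom in GL, so a screen coordinate coming from the OS may need flipping first.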
I am trying to understand OpenGL concepts. While reading this tutorial - http://www.arcsynthesis.org/gltut/Positioning/Tut04%20Perspective%20Projection.html -
I came across this statement:
This is because camera space and NDC space have different viewing directions. In camera space, the camera looks down the -Z axis; more negative Z values are farther away. In NDC space, the camera looks down the +Z axis; more positive Z values are farther away. The diagram flips the axis so that the viewing direction can remain the same between the two images (up is away).
I am confused as to why the viewing direction has to change. Could someone please help me understand this with an example?
This is mostly just a convention. OpenGL clip space (and NDC space and screen space) has always been defined as left-handed (with z pointing away into the screen) by the spec.
OpenGL eye space had been defined with the camera at the origin, looking in the -z direction (so right-handed). However, this convention was only meaningful in the fixed-function pipeline, where, together with the fixed-function per-vertex lighting that was carried out in eye space, the viewing direction did matter in cases like when GL_LOCAL_VIEWER was disabled (as it was by default).
The classic GL projection matrix typically converts the handedness, and the perspective division is done with a divisor of -z_eye, so the last row of the projection matrix is typically (0, 0, -1, 0). The old glFrustum(), glOrtho(), and gluPerspective() actually supported that convention by negating the z_near and z_far clipping distances, so that you had to specify positive values for clip planes lying in front of the camera at z < 0.
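For concreteness, the matrix built by gluPerspective(fovy, aspect, n, f) under that sign convention is (this is the documented fixed-function matrix, written out here as LaTeX):

    P = \begin{pmatrix}
        \frac{\cot(fovy/2)}{aspect} & 0 & 0 & 0 \\
        0 & \cot(fovy/2) & 0 & 0 \\
        0 & 0 & \frac{n+f}{n-f} & \frac{2nf}{n-f} \\
        0 & 0 & -1 & 0
        \end{pmatrix}

The (0, 0, -1, 0) bottom row is what sets clip-space w to -z_eye, and that single sign is the entire handedness flip.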
However, with modern GL, this convention is more or less meaningless. There is no fixed-function unit left which does work in eye space, so eye space (and anything before that) is totally under the user's control. You can use anything you like here. Clip space and all the later spaces are still used by fixed-function units (clipping, rasterization, ...), so there must be some convention to define the interface, and it is still a left-handed system.
Even in modern GL, the old right-handed eye space convention is still in use. The popular glm library for example reimplements the old GL matrix functions the same way.
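As a small illustration (the parameter values here are just made up), the typical modern glm setup still reproduces exactly that convention:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Sketch: glm::lookAt / glm::perspective follow the classic GL convention,
    // i.e. a right-handed eye space with the camera looking down -Z, positive
    // near/far distances, and a handedness flip inside the projection matrix.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),   // eye
                                 glm::vec3(0.0f, 0.0f, 0.0f),   // target
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up
    glm::mat4 proj = glm::perspective(glm::radians(60.0f),      // fovy
                                      16.0f / 9.0f,             // aspect
                                      0.1f, 100.0f);            // near, far
    glm::mat4 clipFromWorld = proj * view;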
There is really no reason to prefer one of the possible conventions over the other, but at some point, you have to choose and stick to one.
In OpenGL (all versions, though I happen to be working in OpenGL ES 2.0) there is the option of using a perspective projection versus an orthogonal one. Is there a way to control the degree of orthogonality?
For the sake of picturing the issue (and please don't take this as the actual question, I am well aware there is no camera in OpenGL) assume that a scene is rendered with the viewport "looking" down the -z axis. Two parallel lines extending a finite distance down the -z axis at (x,y)=1,1 and (x,y)=-1,1 will appear as points in orthogonal projection, or as two lines that eventually converge to a single pixel in perspective projection. Is there a way to have the x- and y- values represented by the outer edges of the screen remain the same as in projection space - I assume this requires not changing the frustum - but have the lines only converge part of the way to a single pixel?
Is there a way to control the degree of orthogonality?
Either something is orthogonal, or it is not. There's no such thing as "just a little orthogonal".
Anyway, from a mathematical point of view, a perspective projection with an infinitely narrow field of view is orthogonal. So you can use glFrustum with very large near and far plane distances, together with a countering translation in the modelview matrix to bring the far-away viewing volume back to the origin.
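A sketch of that idea, building the matrices with GLM since the question is about OpenGL ES 2.0 (all numbers are placeholders); making N larger makes the result more nearly orthogonal:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Sketch: approximate an orthographic view of a 2x2 region around the
    // origin by pushing it far away and viewing it through a narrow frustum.
    const float N    = 1000.0f;            // distant near plane
    const float F    = N + 20.0f;          // keep the depth slice thin
    const float half = 1.0f;               // half-size of the region at depth N

    glm::mat4 proj = glm::frustum(-half, half, -half, half, N, F);
    // Countering translation: move the scene out to where the frustum is.
    glm::mat4 view = glm::translate(glm::mat4(1.0f),
                                    glm::vec3(0.0f, 0.0f, -(N + F) * 0.5f));

With these numbers the lines at x = +/-1 shrink from the screen edge to about 98% of it over the visible depth range, i.e. they converge only part of the way.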
I've been learning OpenGL, and the one topic that continues to baffle me is the far clipping plane. While I can understand the reasoning behind the near clipping plane, and the side clipping planes (which never have any real effect because objects outside them would never be rendered anyway), the far clipping plane seems only to be an annoyance.
Since those behind OpenGL have obviously thought this through, I know there must be something I am missing. Why does OpenGL have a far clipping plane? More importantly, because you cannot turn it off, what are the recommended idioms and practices to use when drawing things at huge distances (for objects such as stars thousands of units away in a space game, a skybox, etc.)? Are you expected just to make the clipping plane very far away, or is there a more elegant solution? How is this done in production software?
The only reason is depth precision. Since you only have a limited number of bits in the depth buffer, you can only represent a finite range of depth with it.
However, you can set the far plane to be infinitely far away (the projection matrix has a well-defined limit as the far distance goes to infinity). It just won't work very well with the depth buffer - you will see a lot of artifacts if you have occlusion far away.
So, since this revolves around the depth buffer, you won't have a problem dealing with further-away stuff as long as you don't use it. For example, a common technique is to render the scene in "slabs" that each only use the depth buffer internally (for all the stuff in one slab) but use some form of painter's algorithm externally (for the slabs, so you draw the furthest one first).
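A rough sketch of that slab approach (SetProjection and DrawSlab are placeholders for your own code, and the split distances are made up):

    /* Sketch: draw the scene far-to-near in depth "slabs", clearing the depth */
    /* buffer between slabs so each slab gets the full depth precision.        */
    const double splits[] = { 100000.0, 10000.0, 1000.0, 1.0 };  /* far->near  */
    const int numSlabs = 3;

    for (int i = 0; i < numSlabs; ++i) {
        double slabFar  = splits[i];
        double slabNear = splits[i + 1];

        SetProjection(slabNear, slabFar);  /* placeholder: per-slab projection  */
        glClear(GL_DEPTH_BUFFER_BIT);      /* painter's algorithm between slabs */
        DrawSlab(slabNear, slabFar);       /* placeholder: draw this slab only  */
    }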
Why does OpenGL have a far clipping plane?
Because computers are finite.
There are generally two ways to attempt to deal with this. One way is to construct the projection by taking the limit as z-far approaches infinity. This will converge on finite values, but it can play havoc with your depth precision for distant objects.
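Concretely (standard GL conventions assumed), only the third row of the perspective matrix depends on the far distance, and it has a finite limit:

    \lim_{f \to \infty}
    \begin{pmatrix} 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \end{pmatrix}
    = \begin{pmatrix} 0 & 0 & -1 & -2n \end{pmatrix}

so the "infinite far plane" matrix is obtained simply by replacing that row; distant objects then crowd toward the far end of the depth range, which is the precision caveat mentioned above.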
An alternative (if you're willing to have objects beyond a certain distance fail to depth-test correctly at all) is to turn on depth clamping with glEnable(GL_DEPTH_CLAMP). This will prevent clipping against the near and far planes; it's just that any fragments that would have normalized z coordinates outside of the [-1, 1] range will be clamped to that range. As previously indicated, it screws up depth tests between fragments that are being clamped, but usually those objects are far away.
It's just "the fact" that OpenGL depth test was performed in Window Space Coordinates (Normalized device coordinates in [-1,1]^3. With extra scaling glViewport and glDepthRange).
So from my point of view it's one of the design point of view of the OpenGL library.
One of approach to eliminate this OpenGL extension/OpenGL core functionality https://www.opengl.org/registry/specs/ARB/depth_clamp.txt if it is available in your OpenGL version.
I want to point out that in a perspective projection itself there is nothing that requires a "far clipping plane".
3.1 For a perspective projection you need to set up a point \vec{c} as the center of projection and a plane onto which the projection is performed. Let's call it the
image plane T: (\vec{r}-\vec{r_0},\vec{n}) = 0
3.2 Assume that the image plane T separates an arbitrary point \vec{r} from the center of projection \vec{c}. Otherwise \vec{r} and \vec{c} lie in the same half-space and the point \vec{r} should be discarded.
3.4 The idea of the projection is to find the intersection \vec{i} with the plane T:
\vec{i}=(1-t)\vec{c}+t\vec{r}
3.5 Since \vec{i} lies in T,
(\vec{i}-\vec{r_0},\vec{n})=0
=>
((1-t)\vec{c}+t\vec{r}-\vec{r_0},\vec{n})=0
=>
(\vec{c}+t(\vec{r}-\vec{c})-\vec{r_0},\vec{n})=0
=>
t=\frac{(\vec{r_0}-\vec{c},\vec{n})}{(\vec{r}-\vec{c},\vec{n})}
3.6 The t derived in "3.5" can be substituted into "3.4", and you obtain the projection onto the plane T.
3.7 After the projection the point lies in the plane. But if the image plane is parallel to the OXY plane, I suggest keeping the point's original "depth" alongside its projection.
So from a geometric point of view it is possible not to use a far plane at all, and also not to use the [-1,1]^3 volume explicitly at all.
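For completeness, a small code sketch of that projection (GLM; all inputs are placeholders), including the t from step 3.5:

    #include <glm/glm.hpp>

    // Sketch: project r onto the image plane (r0, n) from the center of
    // projection c, per the derivation above. Assumes (r - c, n) != 0.
    glm::vec3 ProjectOntoImagePlane(const glm::vec3& c, const glm::vec3& r,
                                    const glm::vec3& r0, const glm::vec3& n)
    {
        // t from 3.5: t = (r0 - c, n) / (r - c, n)
        float t = glm::dot(r0 - c, n) / glm::dot(r - c, n);
        // i from 3.4: i = (1 - t) * c + t * r
        return (1.0f - t) * c + t * r;
    }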
P.S. I don't know how to type LaTeX formulas in the correct way so that they will render.