gluPerspective vs. gluOrtho2D - opengl

I have looked at the MSDN documentation for these two functions, but I don't exactly understand the difference between them, other than that one sets up a camera view for 3D and the other for 2D. It would be great if someone could explain it. Thanks in advance for any comments.

An orthographic projection is basically a 3D projection that does not have perspective: a given position does not get closer to the centre of projection the further it gets from the viewer. Perspective is the opposite. Because you divide by w after projecting, a point with a larger w (one that is further from the centre of projection in world terms) will "appear" closer to the centre of projection after the w-divide. It is this perspective projection and w-divide that give us the sense of depth in 3D graphics.
If you remember drawing a cube in early maths lessons: if you draw the two squares that make up the ends of the cube the same size, the back end of the cube looks too large. That is an orthographic projection. It looks weird because our eyes are used to seeing things with perspective.
If you shrink that second square, you get perspective and hence the perspective projection.
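As a rough illustration of how the two are typically set up in fixed-function OpenGL (the numeric parameters and the width/height variables below are just placeholder assumptions, not anything from the question):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// Perspective: farther points end up with a larger w, so they shrink
// toward the centre of projection after the w-divide.
gluPerspective(45.0, (double)width / (double)height, 0.1, 100.0);
// Orthographic (2D): no effective w-divide, so distance from the viewer
// does not change the projected size.
// gluOrtho2D(0.0, (double)width, 0.0, (double)height);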
Wikipedia has some good images demonstrating the difference as well as a good explanation.
Parallel (or Orthographic) projection
Perspective (or 3D) projection
a decent explanation of perspective in general

Related

Correct way to translate in a 3D world after rotating?

I have a basic cube generated in my 3D world. I can rotate correctly around the camera, but when I translate after rotating, the translations are not correct.
For example, if I rotate 90 degrees and translate into the Z axis, it would move as if translating in the X axis.
glLoadIdentity();
glRotatef(angle,0,1,0); //Rotate around the camera.
glTranslatef(movX,movY,movZ); //Translate after rotating around the camera.
glCallList(cubes[0]);
I need some help with this. Also, I tried translating before rotating, but the rotation is not at the camera. It is at the edge of the cube.
Keep in mind that in OpenGL the transformation is applied to the camera, not to the objects being rendered, so you observe the inverse of the transformation you expected.
Also in OpenGL the Y and Z axes are flipped (Y is vertical), so you observe a horizontal translation instead of a vertical one.
Also, because the object is rotated though 90 degrees about Y, the X and Z axes replace each other (one of them is reversed).
willywonkadailyblah's answer is half correct. Because you are using the old OpenGL, you're using the old matrix stack. You are modifying the modelview matrix when you're doing your glRotatef and glTranslatef calls. The modelview matrix is actually the model's matrix and the camera's view matrix precombined (already multiplied together). These matrices are what determine where your object is in 3D space and where your viewing position/direction of the world is. So you can think of your calls as moving the camera, but it's probably easier to think of them as moving and rotating the world.
These rotate and translate calls are transformations with a precise mathematical definition, but for our purposes it means that each one can be represented as a matrix, and you multiply that matrix with a point's coordinates to apply the transformation to the point. Now, matrix multiplication is not commutative, meaning AB != BA. All this is to say that rotating then translating is different from translating then rotating, which I think you already know. But when you translate, rotate, and translate again, it can be a little harder to follow what you're actually doing, and worse still if you throw some scaling in there. So I would suggest learning how these transformations work and maintaining your own matrices for the objects and camera if you're serious about learning OpenGL.
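As a rough sketch of one common approach (not necessarily the only correct one for your scene): keep the camera's position and heading in your own variables, compute movement along that heading yourself, and apply the inverse camera transform before drawing. The names camYaw, camX, camY, camZ and dist below are hypothetical placeholders, and sinf/cosf need <math.h>.
// Step the camera forward along its current heading.
float rad = camYaw * 3.14159265f / 180.0f;
camX -= sinf(rad) * dist;
camZ -= cosf(rad) * dist;

// Build the view as the inverse of the camera's own transform,
// then draw objects at their own world positions.
glLoadIdentity();
glRotatef(-camYaw, 0.0f, 1.0f, 0.0f);  // inverse of the camera's rotation
glTranslatef(-camX, -camY, -camZ);     // inverse of the camera's position
glCallList(cubes[0]);                  // the cube keeps its place in the world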
learnopengl.org is an excellent website, but it teaches Modern OpenGL, not what you're currently using. Still, its lessons on transformations and on coordinate systems should be generally helpful, even without exact code for you to follow.

OpenGL spectrum between perspective and orthogonal projection

In OpenGL (all versions, though I happen to be working in OpenGL ES 2.0) there is the option of using a perspective projection versus an orthogonal one. Is there a way to control the degree of orthogonality?
For the sake of picturing the issue (and please don't take this as the actual question, I am well aware there is no camera in OpenGL) assume that a scene is rendered with the viewport "looking" down the -z axis. Two parallel lines extending a finite distance down the -z axis at (x,y)=1,1 and (x,y)=-1,1 will appear as points in orthogonal projection, or as two lines that eventually converge to a single pixel in perspective projection. Is there a way to have the x- and y- values represented by the outer edges of the screen remain the same as in projection space - I assume this requires not changing the frustum - but have the lines only converge part of the way to a single pixel?
Is there a way to control the degree of orthogonality?
Either something is orthogonal, or it is not. There's no such thing as "just a little orthogonal".
Anyway, from a mathematical point of view, a perspective projection with an infinitely narrow field of view is orthogonal. So you can use glFrustum with a very large near and far plane distance, together with a countering translation in modelview to bring the far away viewing volume back to the origin.
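A rough sketch of that idea (all names and numbers below are hypothetical, not from the question): keep the size of the frustum fixed at the plane you care about, and push the eye further and further away. The further away it is, the closer the result gets to an orthographic projection.
// halfWidth/halfHeight: half-size of the view at the plane of interest (z = 0 in world).
// dist: how far the virtual eye is pulled back; must exceed depth so that zNear > 0.
// As dist grows, the projection approaches an orthographic one.
void setupAlmostOrtho(float halfWidth, float halfHeight, float dist, float depth)
{
    float zNear = dist - depth;
    float zFar  = dist + depth;
    float scale = zNear / dist;   // similar triangles: the near rectangle shrinks with distance

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-halfWidth * scale,  halfWidth * scale,
              -halfHeight * scale, halfHeight * scale,
              zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -dist);  // the countering translation mentioned above
}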

OpenGL orthographic projection

In the description of gluOrtho2d they say it's like glOrtho with near=-1 and far=1.
1) Why is near behind the viewer?
2) Why does the matrix described here (http://www.opengl.org/sdk/docs/man2/xhtml/glOrtho.xml) have the Z-axis inverted?
1) As the name would imply, gluOrtho2d is meant for drawing things where the depth coordinate doesn't really matter. It's set up so you can send 2-component verts to the GPU and the depth just defaults to 0. In this case it makes sense to have a projection where 0 is right in between the near/far planes so you don't have to worry about it. It's worth mentioning that in an orthographic projection the idea of being "behind" the viewer loses some of its meaning anyway, because the distance from the viewer to the object has no effect on the projection other than deciding whether or not to draw it at all.
2) Probably because in OpenGL eye space the negative Z-axis points into the screen, so if glOrtho didn't negate the Z axis you'd always have to pass in negative values for near and far, which would be a little weird I guess.
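To make point 1 concrete, the two calls below should set up the same projection (the 800x600 numbers are arbitrary), which is why a plain 2-component vertex, whose z defaults to 0, lands exactly halfway between the near and far planes:
gluOrtho2D(0.0, 800.0, 0.0, 600.0);
// ...is equivalent to:
glOrtho(0.0, 800.0, 0.0, 600.0, -1.0, 1.0);  // near = -1, far = 1

// A 2-component vertex gets z = 0, comfortably inside [-1, 1]:
glVertex2f(100.0f, 100.0f);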

OpenGL define vertex position in pixels

I've been writing a basic 2D game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly, or can someone point to an article or something that'll explain this? Thanks!
This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway the standard OpenGL pipeline is as follows:
1) The vertex position is transformed from object-space (local to some object) into world-space (with respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.
2) Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera through which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.
3) Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther-away objects appearing smaller). In your case of a 2D view, though, it will probably be an orthographic projection that does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.
4) After these 3 (or 2) transformations (and the following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means that after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.
5) In a final step the viewport transformation is applied, and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which are the vertex's final positions in the framebuffer and therefore its pixel coordinates.
When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.
The main thing to keep in mind is that after the modelview and projection transforms, every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1] box determines your visible scene after these two transformations.
So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.
These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. This documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v" = v' / v'(3)) after the v' = P x M x v step.
EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practically and in more detail.
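If it helps, here is a minimal sketch of those steps for a single vertex, roughly mirroring what gluProject does. It assumes the modelview and projection matrices are available as plain column-major 4x4 arrays (as OpenGL stores them); nothing here is part of the OpenGL API itself.
// v_obj: object-space vertex (x, y, z, w); mv, proj: column-major 4x4 matrices.
// Writes the resulting window coordinates into win[3].
void projectVertex(const double v_obj[4], const double mv[16], const double proj[16],
                   int vpX, int vpY, int vpW, int vpH, double win[3])
{
    double eye[4], clip[4];

    // Steps 1-2: object-space -> view-space (modelview matrix).
    for (int i = 0; i < 4; ++i)
        eye[i] = mv[i] * v_obj[0] + mv[4 + i] * v_obj[1]
               + mv[8 + i] * v_obj[2] + mv[12 + i] * v_obj[3];

    // Step 3: view-space -> clip-space (projection matrix).
    for (int i = 0; i < 4; ++i)
        clip[i] = proj[i] * eye[0] + proj[4 + i] * eye[1]
                + proj[8 + i] * eye[2] + proj[12 + i] * eye[3];

    // Step 4: perspective division -> normalized device coordinates in [-1,1].
    double ndcX = clip[0] / clip[3];
    double ndcY = clip[1] / clip[3];
    double ndcZ = clip[2] / clip[3];

    // Step 5: viewport transformation -> window (pixel) coordinates.
    win[0] = vpX + (ndcX * 0.5 + 0.5) * vpW;
    win[1] = vpY + (ndcY * 0.5 + 0.5) * vpH;
    win[2] = ndcZ * 0.5 + 0.5;  // depth in [0,1]
}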
It is called transformation.
Vertices are set in 3D coordinates which are transformed into viewport coordinates (into your window view). This transformation can be set up in various ways. An orthogonal (orthographic) transformation is probably the easiest to understand as a starter.
http://www.songho.ca/opengl/gl_transform.html
http://www.opengl.org/wiki/Vertex_Transformation
http://www.falloutsoftware.com/tutorials/gl/gl5.htm
Firstly, be aware that OpenGL does not use standard pixel coordinates. I mean that for a particular resolution, e.g. 800x600, you don't have horizontal coordinates in the range 0-799 or 1-800 stepped by one. Instead you have coordinates ranging from -1 to 1, which are later sent to the graphics card's rasterizing unit and mapped to the particular resolution.
I omitted one step here: before all that you have a ModelViewProjection matrix (or a ViewProjection matrix in some simple cases) which casts the coordinates you use onto a projection plane. Its default use is to implement a camera which converts the 3D space of the world (View for placing the camera in the right position, Projection for casting 3D coordinates onto the screen plane; in ModelViewProjection there is also the step of placing the model in the right place in the world).
Another use (and you can use the Projection matrix this way to achieve what you want) is to use these matrices to convert one coordinate range to another.
And there's a trick you will need. You should read about the ModelViewProjection matrix and the camera in OpenGL if you want to get serious. But for now I will tell you that with the proper matrix you can cast your own coordinate system (e.g. use ranges 0-799 horizontally and 0-599 vertically) to the standardized -1 to 1 range. That way you will not see that the underlying OpenGL API uses its own -1 to 1 system.
The easiest way to achieve this is the glOrtho function. Here's the link to the documentation:
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml
This is an example of proper usage:
glMatrixMode(GL_PROJECTION);   // select the projection matrix
glLoadIdentity();
glOrtho(0, 800, 600, 0, 0, 1); // x runs 0-800 left to right, y runs 0-600 top to bottom
glMatrixMode(GL_MODELVIEW);    // switch back to the modelview matrix for drawing
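With that projection in place, a quick sketch of drawing a rectangle directly in pixel coordinates might look like this (immediate mode, arbitrary values):
// A 200x100 pixel rectangle with its top-left corner at (50, 50).
// With glOrtho(0, 800, 600, 0, ...) the y axis runs downward, like typical screen coordinates.
glBegin(GL_QUADS);
    glVertex2f( 50.0f,  50.0f);
    glVertex2f(250.0f,  50.0f);
    glVertex2f(250.0f, 150.0f);
    glVertex2f( 50.0f, 150.0f);
glEnd();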
Now you can use your own modelview matrix, e.g. for translating (moving) objects, but don't touch your projection matrix. This code should be executed before any drawing commands. (In fact it can be run once after initializing OpenGL, if you won't use 3D graphics.)
And here's a working example: http://nehe.gamedev.net/tutorial/2d_texture_font/18002/
Just draw your figures instead of drawing text. And there is another thing: glPushMatrix and glPopMatrix for the chosen matrix (in this example the projection matrix) - you won't need those until you combine 3D with 2D rendering.
And you can still use the model matrix (e.g. for placing tiles somewhere in the world) and the view matrix (for example for zooming the view, or scrolling through the world - in this case your world can be larger than the resolution and you can crop the view with simple translations).
After looking at my answer I see it's a little chaotic, but if you're confused, just read about the Model, View, and Projection matrices and try the example with glOrtho. If you're still confused feel free to ask.
MSDN has a great explanation. It may be in terms of DirectX but OpenGL is more-or-less the same.
Google for "opengl rendering pipeline". The first five articles all provide good expositions.
The key transition from vertices to pixels (actually, fragments, but you won't be too far off if you think "pixels") is in the rasterization stage, which occurs after all vertices have been transformed from world-coordinates to screen coordinates and clipped.

Defining Light Coordinates

I took a Computer Graphics exam a couple of days ago which had an extra credit question like the following:
A light can be defined in one of two ways. It can be defined in world coordinates, e.g. a street light, or in viewer (eye) coordinates, e.g. a head-lamp worn by a miner. In either case the viewpoint can freely change. Describe how the light should be transformed differently in these two cases.
Since I won't get to see the results of this until after spring break, I thought I would ask here.
It seems like the analogies being used are misleading - could you not define a light source that is located at the viewer's eye in world coordinates just as well as in eye coordinates? I've been doing some research on how OpenGL handles light, and it seems as though it always uses eye coordinates - the ModelView matrix would be applied to any light defined in world coordinates.
In that case the answer may just be that you would have to transform light defined in world coordinates into eye coordinates using something like the ModelView matrix, while light defined in eye coordinates would only need to be transformed by the projection matrix.
Then again, I could be totally under-thinking (or over-thinking) this.
Another thought I had is that it determines which way you render shadows, but that has more to do with the location of the light and its type (point, directional, emission, etc) than what coordinates it is represented in.
Any ideas?
The light's position is transformed by the modelview matrix that was active at the moment the light was defined.
If the modelview matrix was identity at that point, you'll get the light in eye coordinates.
If the modelview matrix was the inverse of your camera matrix, you'll get the light in world space.
In the case of the street light, the world coordinates will stay constant when the viewpoint moves.
In the case of the head-lamp, the eye coordinates will stay constant when the viewpoint moves.
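A minimal sketch of the two cases in fixed-function OpenGL (the light position and the gluLookAt parameters are arbitrary placeholders):
GLfloat lightPos[] = { 10.0f, 20.0f, 5.0f, 1.0f };

// Head-lamp (eye coordinates): define the light while the modelview matrix is
// identity, so the position is interpreted relative to the viewer and stays
// fixed on screen when the viewpoint changes.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

// Street light (world coordinates): load the camera's view transform (the
// inverse of the camera matrix) first, then define the light; its world
// position now stays constant while the viewpoint moves.
glLoadIdentity();
gluLookAt(eyeX, eyeY, eyeZ, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);  // hypothetical camera
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);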