According to MSDN, we are multiplying the current matrix with the perspective matrix. What matrix are we talking about here? Also from MSDN:
"The glFrustum function multiplies the current matrix by this matrix, with the result replacing the current matrix. That is, if M is the current matrix and F is the frustum perspective matrix, then glFrustum replaces M with M • F."
Now, from my current knowledge of grade 12 calculus (which I am currently taking), M is a vector and M dot F returns a scalar, so how can one replace a vector with a scalar?
I'm also not sure what "clipping" planes are and how they can be referenced via one float value.
Please phrase your answer in terms of the parameters, and also conceptually. I'd really appreciate that.
I'm trying to learn OpenGL via this tutorial: http://www.youtube.com/watch?v=WdGF7Bw6SUg. It is actually not very good, because it either explains nothing or assumes one already knows OpenGL. I'd really appreciate a link to a better tutorial; otherwise I'm sure I can struggle through and figure things out. I really appreciate your time and responses!
You misunderstand what it's saying. M is a matrix. M•F therefore is also a matrix. It constructs a perspective matrix. See this article for an explanation of how it is constructed and when you want to use glFrustum() vs. gluPerspective():
glFrustum() and gluPerspective() both produce perspective projection matrices that you can use to transform from eye coordinate space to clip coordinate space. The primary difference between the two is that glFrustum() is more general and allows off-axis projections, while gluPerspective() only produces symmetrical (on-axis) projections. Indeed, you can use glFrustum() to implement gluPerspective().
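As a concrete illustration of that last point, here is a sketch (a hypothetical helper of my own, not a library function) that feeds gluPerspective-style parameters to glFrustum; fovy is the vertical field of view in degrees.
#include <math.h>
#include <GL/gl.h>

// Build a symmetric (on-axis) frustum from gluPerspective-style parameters.
void perspectiveViaFrustum(double fovy, double aspect, double zNear, double zFar)
{
    double top   = zNear * tan(fovy * M_PI / 360.0); // half the height of the near plane
    double right = top * aspect;                     // half the width of the near plane
    glFrustum(-right, right, -top, top, zNear, zFar);
}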
Clipping planes are planes that cut out sections of the world so they don't have to be rendered. The frustum describes where the planes are in space. Its sides define the view volume.
glFrustum generates a perspective projection matrix.
This matrix maps a portion of the space (the "frustum") to your screen. Many caveats apply (normalized device coordinates, perspective divide, etc), but that's the idea.
The part that puzzles you is a relic of the deprecated OpenGL API. You can safely ignore it, because usually you apply glFrustum() to an identity projection matrix, so the multiplication mentioned in the doc has no effect.
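To make the "current matrix" part concrete, here is the usual legacy pattern (a small sketch; the frustum values are placeholders): the projection matrix is first reset to the identity I, so glFrustum leaves you with I • F = F.
glMatrixMode(GL_PROJECTION);                   // the "current matrix" is now the projection matrix
glLoadIdentity();                              // M = I
glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 100.0); // M is replaced with I • F = F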
A perhaps more comprehensible way to do things is the following:
// Projection matrix : 45° Field of View, 4:3 ratio, display range : 0.1 unit <-> 100 units
glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
// Or, for an ortho camera :
//glm::mat4 Projection = glm::ortho(-10.0f,10.0f,-10.0f,10.0f,0.0f,100.0f); // In world coordinates
// Camera matrix
glm::mat4 View = glm::lookAt(
    glm::vec3(4,3,3), // Camera is at (4,3,3), in World Space
    glm::vec3(0,0,0), // and looks at the origin
    glm::vec3(0,1,0)  // Head is up (set to 0,-1,0 to look upside-down)
);
// Model matrix : an identity matrix (model will be at the origin)
glm::mat4 Model = glm::mat4(1.0f);
// Our ModelViewProjection : multiplication of our 3 matrices
glm::mat4 MVP = Projection * View * Model; // Remember, matrix multiplication is the other way around
Here, I used glm::perspective (the equivalent of old-style gluPerspective()), but you can use glFrustum too.
As for the arguments of the function, they are best explained by this part of the doc:
(left, bottom, -nearVal) and (right, top, -nearVal) specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, assuming that the eye is located at (0, 0, 0)
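To get a feel for those parameters, here is a hedged example call (the numbers are mine): the near plane sits 1 unit in front of the eye, and the window shows the rectangle x in [-0.2, 0.6], y in [-0.3, 0.3] on that plane - an off-axis ("lens shift") frustum that gluPerspective could not express.
glFrustum(-0.2, 0.6,    // left, right on the near clipping plane
          -0.3, 0.3,    // bottom, top on the near clipping plane
           1.0, 100.0); // distances to the near and far clipping planes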
If you're unfamiliar with transformation matrices, I wrote a tutorial here; the "Projection Matrix" section especially should help you.
Related
I'm working with OpenGL, but this is basically a math question.
I'm trying to calculate the projection matrix. I have a point on the view plane R(x,y,z) and the normal vector of that plane N(n1,n2,n3).
I also know that the eye is at (0,0,0), which I guess in technical terms is the Perspective Reference Point.
How can I arrive at the perspective projection from this data? I know how to do it the regular way, where you have the FOV, aspect ratio, and near and far planes.
I think you created a bit of confusion by putting this question under the "opengl" tag. The problem is that in computer graphics, the term projection is not understood in a strictly mathematical sense.
In maths, a projection is defined (and the following is not the exact mathematical definition, but just my own paraphrasing) as something which doesn't further change the result when applied twice. Think about it. When you project a point in 3D space onto a 2D plane (which is still in that 3D space), each point's projection will end up on that plane. But points which already are on this plane aren't moved at all any more, so you can apply the projection as many times as you want without changing the outcome any further.
The classic "projection" matrices in computer graphics don't do this. They transform the space in a way that maps a general frustum to a cube (or cuboid). For that, you basically need all the parameters that describe the frustum, which typically are the aspect ratio, the field of view angle, and the distances to the near and far planes, as well as the projection direction and the center point (the latter two are typically implicitly defined by convention). For the general case, there are also horizontal and vertical asymmetry components (think of it like "lens shift" with projectors). And all of that is what the typical projection matrix in computer graphics represents.
To construct such a matrix from the parameters you have given is not really possible, because you are lacking lots of parameters. Also - and I think this is kind of revealing - you have given a view plane. But the projection matrices discussed so far do not define a view plane - any plane parallel to the near or far plane and in front of the camera can be imagined as the viewing plane (behind the camera would also work, but the image would be mirrored), if you should need one. But in the strict sense, it would only be a "view plane" if all of the projected points would also end up on that plane - which the computer graphics perspective matrix explicitly doesn't do. It instead keeps their 3D distance information - which also means that the operation is invertible, while a classical mathematical projection typically isn't.
From all of that, I simply guess that what you are looking for is a perspective projection from 3D space onto a 2D plane, as opposed to the perspective transformation used for computer graphics. And all the parameters you need for that are just the view point and a plane. Note that this is exactly what you have given: the projection center shall be the origin, and R and N define the plane.
Such a projection can also be expressed in terms of a 4x4 homogeneous matrix. There is one thing that is not defined in your question: the orientation of the normal. I'm assuming the standard maths convention again and assume that the view plane is defined as <N,x> + d = 0. Plugging R into that equation, we get d = -N_x*R_x - N_y*R_y - N_z*R_z. So the projection matrix is just
(    1       0       0     0 )
(    0       1       0     0 )
(    0       0       1     0 )
( -N_x/d  -N_y/d  -N_z/d   0 )
This matrix has a few notable properties. There is a zero column, so it is not invertible. Also note that for every point (s*x, s*y, s*z, 1) you apply this to, the result (after division by the resulting w, of course) is the same no matter what s is - so every point on the line between the origin and (x,y,z) will result in the same projected point - which is what a perspective projection is supposed to do. And finally note that w = (N_x*x + N_y*y + N_z*z)/-d, so for every point fulfilling the above plane equation, w = -d/-d = 1 will result; in combination with the identity transform for the other dimensions, this just means that such a point is left unchanged.
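If you want to check this numerically, here is a small GLM sketch (my own code, with made-up values for N and R) that builds the matrix above and applies it to a point off the plane. GLM stores matrices column-major, so the bottom row above becomes the fourth component of each column.
#include <cstdio>
#include <glm/glm.hpp>

int main()
{
    glm::vec3 N(0.0f, 0.0f, 1.0f);       // plane normal
    glm::vec3 R(0.0f, 0.0f, 5.0f);       // a point on the plane (the plane z = 5)
    float d = -glm::dot(N, R);           // from <N,x> + d = 0

    glm::mat4 P(1.0f);                   // start from the identity
    P[0][3] = -N.x / d;                  // bottom row: (-N_x/d, -N_y/d, -N_z/d, 0)
    P[1][3] = -N.y / d;
    P[2][3] = -N.z / d;
    P[3][3] = 0.0f;                      // the zero column mentioned above

    glm::vec4 p = P * glm::vec4(2.0f, 4.0f, 10.0f, 1.0f); // a point off the plane
    p /= p.w;                                             // perspective divide
    std::printf("projected: %f %f %f\n", p.x, p.y, p.z);  // lands on the plane z = 5
    return 0;
}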
Projection must be at (0,0,0) and viewing in the Z+ or Z- direction
This is a must, because many things in OpenGL depend on it, like fog, lighting ... So if your direction or position is different, then you need to move that part into the camera matrix. Let's assume your focal point is (0,0,0) as you stated, and the normal vector is (0,0,+/-1).
Z near
is the distance between the focal point and the projection plane, so znear is the perpendicular distance between the plane and (0,0,0). If the above assumption holds, then
znear=R.z
otherwise you need to compute it. I think you have everything you need for that (a small sketch follows this list):
cast a line from R with direction N
find the closest point on that line to the focal point (0,0,0)
the znear is then the distance from that point to R
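Here is a minimal sketch of that recipe (function and variable names are mine), assuming GLM and that N need not be unit length:
#include <glm/glm.hpp>

float computeZNear(const glm::vec3& R, const glm::vec3& N)
{
    // line P(t) = R + t*N; its closest point to the focal point (0,0,0) is at
    float t = -glm::dot(R, N) / glm::dot(N, N);
    glm::vec3 closest = R + t * N;
    // znear is the distance from that point to R, i.e. the perpendicular
    // distance of the focal point from the plane through R with normal N
    return glm::length(closest - R);   // equals fabs(glm::dot(R, N)) / glm::length(N)
}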
Z far
is determined by the depth buffer bit width and z near
zfar=znear*(1<<(cDepthBits-1))
this is the maximal usable zfar (for my purposes); if you need more precision, then lower it a bit. Do not forget that depth precision is highest near znear and much, much worse near zfar. The zfar is usually set to the maximum view distance, and znear is then computed from it or set to the minimum focus range.
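For example (the numbers are mine), with a 24-bit depth buffer and znear = 0.1 this rule of thumb gives:
int    cDepthBits = 24;
double znear      = 0.1;
double zfar       = znear * double(1 << (cDepthBits - 1)); // 0.1 * 8388608 = 838860.8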
view angle
I use mostly 60 degree view. zang=60.0 [deg]
Common males in my region can see up to 90 degrees, but that includes peripheral vision; the 60 degree view is more comfortable to look at.
Females have a bit wider field of view ... but I have never heard any complaints from them about 60 degree views, so let's assume it is comfortable for them too...
Aspect
aspect ratio is determined by your OpenGL window dimensions xs,ys
aspect=(xs/ys)
This is how I set the projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(zang/aspect,aspect,znear,zfar);
// gluPerspective has an inaccurate tangent, so correct the perspective matrix like this:
const double deg=M_PI/180.0;    // degree -> radian conversion factor
double perspective[16];
glGetDoublev(GL_PROJECTION_MATRIX,perspective);
perspective[ 0]=   1.0/tan(0.5*zang*deg);
perspective[ 5]=aspect/tan(0.5*zang*deg);
glLoadMatrixd(perspective);
perspective is a copy of the projection matrix; I use it for mouse position conversions etc ...
If you do not correct the matrix, then you will be off when using advanced things like overlapping multiple frustums to get a high precision depth range. I use this to obtain a <0.1 m, 1000 AU> frustum with a 24-bit depth buffer, and the inaccuracy would cause the images not to fit together perfectly ...
[Notes]
If the focal point is not really (0,0,0), or you are not viewing along the Z axis (for example you do not have a camera matrix but use the projection matrix for that instead), then on basic scenes/techniques you will see no problem. Problems start with the use of advanced graphics. If you use GLSL then you can handle this without problems, but the fixed-function OpenGL pipeline cannot handle it properly. This is also called PROJECTION_MATRIX abuse.
[edit1] a few links
If your view is a standard frustum, then write the matrix yourself (gluPerspective); otherwise look here (Projections) for some ideas on how to construct it.
[edit2]
From your comment I see it like this:
f is your viewing point (the axes are the global world axes)
f' is the viewing point if R were the center of the screen
So create a projection matrix for the f' position (as explained above), and create a transform matrix to transform f' to f. The transformed f must have the same Z axis as f'; the other axes can be obtained by cross products. Use that as the camera matrix, or multiply both together and use the result as an abused Projection matrix.
How to construct the matrix is explained in the Understanding transform matrices link from my earlier comments
I am having profound issues understanding the transformations involved in VTK. OpenGL has fairly good documentation, and I was under the impression that VTK is very similar to OpenGL (it is, in many ways). But when it comes to transformations, it seems to be an entirely different story.
This is a good OpenGL documentation about transforms involved:
http://www.songho.ca/opengl/gl_transform.html
The perspective projection matrix in OpenGL, in terms of the frustum bounds (l, r, b, t, n, f), is:
( 2n/(r-l)     0       (r+l)/(r-l)       0       )
(    0      2n/(t-b)   (t+b)/(t-b)       0       )
(    0         0      -(f+n)/(f-n)  -2*f*n/(f-n) )
(    0         0           -1            0       )
I wanted to see whether this formula, applied with VTK's parameters, gives me VTK's projection matrix (by cross-checking against the VTK projection matrix).
Relevant Camera and Renderer Parameters:
camera->SetPosition(0,0,20);
camera->SetFocalPoint(0,0,0);
double crSet[2] = {10, 1000};
renderer->GetActiveCamera()->SetClippingRange(crSet);
double windowSize[2];
renderWindow->SetSize(1280,720);
renderWindowInteractor->GetSize(windowSize);
proj = renderer->GetActiveCamera()->GetProjectionTransformMatrix(windowSize[0]/windowSize[1], crSet[0], crSet[1]);
The projection transform matrix I got for this configuration is:
The (3,3) and (3,4) values of the projection matrix (let's say it is indexed 1 to 4 for rows and columns) should be -(f+n)/(f-n) and -2*f*n/(f-n) respectively. In my VTK camera settings, nearz is 10 and farz is 1000, and hence I should get -1.020 and -20.20 respectively in the (3,3) and (3,4) locations of the matrix. But they are -1010 and -10000.
I have changed my clipping range values to see the changes and the (3,3) position is always nearz+farz which makes no sense to me. Also, it would be great if someone can explain why it is 3.7320 in the (1,1) and (2,2) positions. And this value DOES NOT change when I change the window size of the renderer window. Quite perplexing to me.
I see in VTKCamera class reference that GetProjectionTransformMatrix() returns the transformation matrix that maps from camera coordinates to viewport coordinates.
VTK Camera Class Reference
The page linked above gives a nice depiction of the transforms involved in OpenGL rendering.
The OpenGL projection matrix is the matrix that maps from eye coordinates to clip coordinates. It is beyond doubt that eye coordinates in OpenGL are the same as camera coordinates in VTK. But are the clip coordinates in OpenGL the same as the viewport coordinates of VTK?
My aim is to simulate a real webcam camera (already calibrated) in VTK to render a 3D model.
Well, the documentation you linked to actually explains this (emphasis mine):
vtkCamera::GetProjectionTransformMatrix:
Return the projection transform matrix, which converts from camera
coordinates to viewport coordinates. This method computes the aspect,
nearz and farz, then calls the more specific signature of
GetCompositeProjectionTransformMatrix
with:
vtkCamera::GetCompositeProjectionTransformMatrix:
Return the concatenation of the ViewTransform and the
ProjectionTransform. This transform will convert world coordinates to
viewport coordinates. The 'aspect' is the width/height for the
viewport, and the nearz and farz are the Z-buffer values that map to
the near and far clipping planes. The viewport coordinates of a point located inside the frustum are in the range
([-1,+1],[-1,+1], [nearz,farz]).
Note that this matches neither OpenGL's window space nor normalized device space. I find the term "viewport coordinates" a poor choice for this, but be that as it may. What bugs me more is that the matrix actually does not transform to that "viewport space", but to some clip-space equivalent. Only after the perspective divide will the coordinates be in the range given in the above definition of the "viewport space".
But are the clip coordinates in OpenGL the same as the viewport coordinates of
VTK?
So the answer is a clear no. But it is close. Basically, that projection matrix is just scaled and shifted along the z dimension, and it is easy to convert between the two: you can simply take znear and zfar out of VTK's matrix, put them into the OpenGL projection matrix formula you linked above, and replace just those two matrix elements (a sketch follows).
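A sketch of that fix-up (my own code; it assumes the usual vtkCamera and vtkMatrix4x4 accessors and takes the clipping range straight from the camera):
#include <vtkCamera.h>
#include <vtkMatrix4x4.h>

// Take the matrix VTK computes and overwrite the two z-related entries with the
// values the OpenGL formula produces for the camera's actual clipping range.
vtkMatrix4x4* openglStyleProjection(vtkCamera* camera, double aspect,
                                    double nearz, double farz)
{
    double n, f;
    camera->GetClippingRange(n, f);   // the real near/far clipping planes

    vtkMatrix4x4* proj = camera->GetProjectionTransformMatrix(aspect, nearz, farz);

    // (3,3) and (3,4) in the question's 1-based notation are elements (2,2) and (2,3) here
    proj->SetElement(2, 2, -(f + n) / (f - n));
    proj->SetElement(2, 3, -2.0 * f * n / (f - n));
    return proj;
}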
http://www.cs.uaf.edu/2007/spring/cs481/lecture/01_23_matrices.html
I have just finished reading this, but I have two questions about the multiplications.
gl_Position = gl_ProjectionMatrix*gl_ModelViewMatrix*gl_Vertex
This gives the final screen coordinates (x,y) when I multiply them all. What if I only multiply gl_ModelViewMatrix*gl_Vertex or gl_ProjectionMatrix*gl_Vertex? And what does gl_Vertex mean on its own?
gl_Vertex is the vertex coordinate in world space.
Vertices and the eye may be arbitrarily placed and oriented in world space.
After multiplication with the ModelView matrix, you get a vertex coordinate in 'eye space', a coordinate system with the eye at (0,0,0). Multiplying by the projection matrix (and doing the homogeneous-coordinate division) gives you coordinates in screen space. Those are not pixel coordinates yet, but a normalized coordinate system with (0,0,0) in the center of the screen/window. The viewport transform comes last; it maps the image to window (pixel) coordinates.
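If it helps, here is a small GLM sketch (my own, not from the answer) of those stages, showing what you get after the ModelView multiplication alone versus after the full chain and the divide (note that newer GLM versions expect the field of view in radians):
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::mat4 modelView  = glm::lookAt(glm::vec3(0, 0, 5),   // eye
                                       glm::vec3(0, 0, 0),   // looking at the origin
                                       glm::vec3(0, 1, 0));  // up
    glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    glm::vec4 vertex(1.0f, 1.0f, 0.0f, 1.0f);  // plays the role of gl_Vertex

    glm::vec4 eye  = modelView * vertex;       // ModelView only: eye-space position
    glm::vec4 clip = projection * eye;         // Projection * ModelView: clip space
    glm::vec3 ndc  = glm::vec3(clip) / clip.w; // after the divide: each axis in [-1,1]

    std::printf("eye: %f %f %f  ndc: %f %f %f\n", eye.x, eye.y, eye.z, ndc.x, ndc.y, ndc.z);
    return 0;
}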
An explanation is given in:
http://www.srk.fer.hr/~unreal/theredbook/
chapter 3.
What if I only multiply gl_ModelViewMatrix*gl_Vertex or gl_ProjectionMatrix*gl_Vertex?
Then the projection matrix (in the first case) or the model-view matrix (in the second case) will not take effect. If it is an identity matrix, you will not see any difference. If it is not, then your view angle or position might be wrong.
And what does gl_Vertex mean alone ?
gl_Vertex is the vertex coordinate that is passed to the input of the pipeline.
I'm not sure how to use the Unproject method provided by GLM.
Specifically, in what format is the viewport passed in? And why doesn't the function require a view matrix as well as a projection and world matrix?
A bit of history is required here. GLM's unProject is actually a more or less direct replacement for the gluUnProject function from the deprecated OpenGL fixed-function pipeline. In that mode, the Model and View matrices were combined into a single "ModelView" matrix. Apparently, the GLM author dropped the 'view' part in the naming, which confuses things even more, but it comes down to passing something like view*model.
Now for the actual use:
win is a vector holding three components in window coordinates: the 'x, y' coordinates in your viewport, and the 'z' coordinate you usually retrieve by reading your depth buffer at (x, y).
The model, view and projection matrices should speak for themselves if you're even thinking of using this function, but a good (OpenGL-specific) refresher can be useful.
The viewport is defined as in glViewport, which means (x,y,w,h). X and Y specify the lower left corner of your viewport (usually 0,0), and w and h are its width and height. Note that in many other systems x,y specify the upper left corner; you then have to flip your y coordinate, as shown in the NeHe code I link to below.
When applied, you end up converting the provided window coordinates back into object coordinates, more or less the inverse of what your render code usually does.
A half-decent explanation on the original gluUnProject can be found as a NeHe article. But of course that is OpenGL-specific, while glm can be used in other contexts.
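For reference, here is a hedged usage sketch (names are mine): recover the position under the mouse cursor, assuming you already have the view and projection matrices and the depth value read back at (mouseX, mouseY).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 pickWorldPosition(float mouseX, float mouseY, float depth,
                            const glm::mat4& view, const glm::mat4& projection,
                            int viewportWidth, int viewportHeight)
{
    // glm::unProject expects window coordinates with the origin at the lower left,
    // so flip y if your windowing system reports it from the top.
    glm::vec3 win(mouseX, viewportHeight - mouseY, depth);
    glm::vec4 viewport(0.0f, 0.0f, viewportWidth, viewportHeight);

    // The "model" argument is really the combined view*model matrix;
    // passing just the view matrix returns world-space coordinates.
    return glm::unProject(win, view, projection, viewport);
}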
The viewport is passed in as four floats: the x and y window coordinates of the viewport, followed by its width and height. That's the same order as used e.g. by glGetFloatv(GL_VIEWPORT, ...). So in most cases, the first two values should be 0.
As KillianDS already pointed out, the model argument in fact is a modelview matrix; see the example use of unProject() in gtx_simd_mat4.cpp, function test_compute_gtx():
glm::mat4 E = glm::translate(D, glm::vec3(1.4f, 1.2f, 1.1f));
glm::mat4 F = glm::perspective(i, 1.5f, 0.1f, 1000.f);
glm::mat4 G = glm::inverse(F * E);
glm::vec3 H = glm::unProject(glm::vec3(i), G, F, E[3]);
As you can see, the matrix passed as the second argument basically is the product of a translation and a perspective transformation.
I'm planning to create a tiled world with OpenGL, with slightly rotated tiles; houses and buildings in the world will be made of models.
Can anybody suggest what projection (orthogonal or perspective) I should use, and how to set up the view matrix (using OpenGL)?
If you can't figure what style of world I'm planning to create, look at this game:
http://www.youtube.com/watch?v=i6eYtLjFu-Y&feature=PlayList&p=00E63EDCF757EADF&index=2
Using an orthogonal vs. perspective projection is entirely an art style choice. The Pokemon series you're talking about is orthogonal -- in fact, it's entirely layered 2D sprites (no 3D involved).
OpenGL has no VIEW matrix. It has a MODELVIEW matrix and a PROJECTION matrix. For Pokemon-style levels, I suggest using simple glOrtho for the projection.
Let's assume your world is in XY space (coordinates for tiles, cameras, and other objects are of the form [x, y, 0]). If a single tile is sized 1x1, then something like glOrtho(0, 12, 0, 9, -10, 10) would be a good projection matrix (12 wide, 9 tall, with the Z=0 ground plane inside the depth range).
For MODELVIEW, you can start by loading identity, glTranslate() by the negative of the camera position, and then glTranslate() by the tile position, before you draw your geometry. If you want to be able to rotate the camera, you glRotate() by the negative (inverse) of the camera rotation between the two Translate()s. In the end, you end up with the following matrix chain:
output = Projection × (CameraTranslation^-1 × CameraRotation^-1 × ModelLocation × ModelRotation) × input
The parts in parens are MODELVIEW, and "^-1" means "inverse", which really is negation for a translation and the transpose for a rotation.
If you want to rotate your models, too, that rotation sits at the right end of the chain (it is applied to the vertices first), which in code means issuing it last, after the tile translate, as in the sketch below.
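A minimal legacy-OpenGL sketch of that chain (variables like cameraX or tileX are placeholders of mine); since glTranslatef/glRotatef post-multiply the current matrix, the camera's inverse transforms are issued first and the model transforms last:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-cameraX, -cameraY, 0.0f);    // CameraTranslation^-1
glRotatef(-cameraAngle, 0.0f, 0.0f, 1.0f); // CameraRotation^-1 (about Z for this top-down world)
glTranslatef(tileX, tileY, 0.0f);          // ModelLocation
glRotatef(modelAngle, 0.0f, 0.0f, 1.0f);   // ModelRotation
/* draw the tile or model here */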
Finally, I suggest the OpenGL forums (www.opengl.org) or the OpenGL subforums of www.gamedev.net might be a better place to ask this question :-)
The projection used by that video game looks Oblique to me. There are many different projections, not just perspective and orthographic. See here for a list of the most common ones: http://en.wikipedia.org/wiki/File:Graphical_projection_comparison.png
You definitely want perspective, with a fixed rotation around the X-axis only, around 45-60 degrees or thereabouts. If you don't care about setting up the projection code yourself, the gluPerspective function from the GLU library is handy.
Assuming OpenGL 2.1:
glMatrixMode(GL_PROJECTION); //clip matrix
glLoadIdentity();
gluPerspective(90.0, (double)width/height, 1.0, 20.0); //cast avoids integer division in the aspect ratio
glMatrixMode(GL_MODELVIEW); //world/object matrix
glLoadIdentity();
glRotatef(45.0f, 1.0f, 0.0f, 0.0f);
/* render */
The last two parameters to gluPerspective are the distances to the near and far clipping planes. Their values depend on the scale you use for the environment.