I'm having trouble going from the explanation of gluPerspective found here: http://unspecified.wordpress.com/2012/06/21/calculating-the-gluperspective-matrix-and-other-opengl-matrix-maths/ to the actual input parameters needed for the function.
I have a cube that I'm displaying stuff in. The coordinates of the cube range from -10 to 10 in every direction.
Can someone give me an example of the gluPerspective() call needed to display that region? I've tried gluPerspective(26, w/h, 10, 30), thinking that the 26-degree angle is the angle from the focal point (10 units from the box) to the middle of the box's top side, which means I have 10 units to the near face and 30 to the far face. However, when I change from glOrtho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f); to gluPerspective(...), nothing is displayed on the screen.
You are likely missing a translate to get your model into the view frustum, and your clipping parameters could be a little better. When you use gluPerspective the camera starts at the origin, so your camera is inside the cube you are drawing. You probably can't tell because the faces at z = -10 are getting clipped; change your near clipping plane to 5 or 1 or something and you should see it.
The camera looks down the negative Z axis by default, so you should translate your model by (0, 0, -20) or so. Clipping parameters of near=5 and far=40 should let it be visible; make sure near is greater than 0 at a minimum.
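A minimal sketch of that setup, assuming w and h are your window dimensions and drawCube() stands for your existing drawing code (the 90° field of view is just a value wide enough for the whole -10..10 cube to fit at this distance; adjust to taste):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90.0, (double)w / (double)h, 5.0, 40.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -20.0f); // push the cube 20 units down -Z, in front of the camera
drawCube(); // your existing drawing code for the -10..10 cube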
Hope this helps!
I am dealing with an experimental setup where a simple picture is being displayed to a gene-modified fly. The picture is projected on a screen with certain distances to that fly.
Now it's my turn to set up the perspective correction, so that the displayed image, for example a horizontal bar, appears wider at a greater distance from the fly's point of view (see the experimental setup). The code currently looks like this:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if(!USEFRUSTUM)
    gluPerspective(90.0f, 2 * width/height, 0.1f, 100.0f);
else
    glFrustum(92.3f, 2.3f, -25.0f, 65.0f, 50.0f, 1000.0f);
The values were entered by someone a few years ago, and we figured out that they are not accurate anymore. However, I am confused about which values to enter or change to make the projection work properly, because, as you can see in the experimental setup, the fly's field of view is a bit tilted.
I thought about those values:
fovy = angle between a and c
measure width and height on the projection screen
But what is zNear? Should I measure the distance from the fly to the top or the bottom of the screen? I don't get why somebody entered 0.1f, because that seems too near to me.
How can I know the value of zFar? Is it the maximum distance from an object to the fly?
I got my information on gluPerspective from: https://www.ntu.edu.sg/home/ehchua/programming/opengl/CG_BasicsTheory.html
I also checked Simplest way to set up a 3D OpenGL perspective projection, but that post doesn't cover my experimental setup, which is the source of my confusion.
Thank you for any help!
This is one of the prime examples where the frustum method is easier to use than perspective. A frustum is essentially a clipped pyramid. Imagine your fly at the tip of a four-sided, tilted pyramid. The near value gives the distance to the pyramid's base, and left, right, bottom and top give the perpendicular distance from each side of the pyramid's base to the tip. It's perfectly reasonable for the tip to be "outside" of the base area. Or, in the case of your fly, the center might be just above the "left" edge.
So assuming your original picture we have this:
"Near" gives the "distance" to the projection screen and right (and of course also top, bottom and left) the respective distances of the tip, err, fly perpendicular to the edges of the screen.
"far" is not important for the projective features and is used solely for determining the depth range.
So what you can put into your program is the following:
double near = ${distance to screen in mm};
double left = ${perpendicular to left screen edge in mm};
double right = ${perpendicular to right screen edge in mm};
double top = ${perpendicular to top screen edge in mm};
double bottom = ${perpendicular to bottom screen edge in mm};
double s = ${scaling factor from mm into OpenGL units};
double zrange = ${depth range in OpenGL units the far plane is behind near};
glFrustum(s*left, s*right, s*bottom, s*top, s*near, s*near + zrange);
Keep in mind that left, right, top and bottom may be negative. So, say you want a symmetrical (unshifted) view in the horizontal direction: then left = -right, and a shift is essentially adding/subtracting the same offset to both.
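For example, with made-up measurements (a 400 mm x 300 mm screen 300 mm in front of the fly, the fly sitting 50 mm to the right of the left edge and 100 mm above the bottom edge), the numbers could be filled in like this:
double near = 300.0; // mm to the screen
double left = -50.0; // left edge is 50 mm to the left of the fly, hence negative
double right = 350.0; // right edge is 350 mm to the right
double bottom = -100.0; // bottom edge is 100 mm below
double top = 200.0; // top edge is 200 mm above
double s = 0.01; // 1 OpenGL unit = 100 mm (arbitrary choice)
double zrange = 20.0; // depth range behind the near plane, in OpenGL units
glFrustum(s*left, s*right, s*bottom, s*top, s*near, s*near + zrange);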
I understand that the camera in OpenGL is defined to be looking in the negative Z direction. So in a simple example, I imagine that for my vertices to be rendered, they must be defined similar to the following:
float rawverts[] = {
0.0f, 0.0f, -1.0f,
0.0f, 0.5f, -1.0f,
0.5f, 0.0f, -1.0f,
};
However, absolutely no guide will tell me the answer. Everywhere I look, the "Hello triangle" example is made with the z coordinate left at 0, and whenever a more complex mesh is defined the coordinates are not even shown. I still have no idea about the possible values of the coordinates for them to be drawn onto the screen. Take, for example, glm::perspective:
glm::mat4 projectionMatrix = glm::perspective(
FoV, // The vertical Field of View, in degrees: the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in)
4.0f / 3.0f, // Aspect Ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960, sounds familiar ?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
But how can the clipping planes be defined with any positive values? The camera faces the -Z direction! Furthermore, if I create near/far clipping planes at, say, -1 and -4, does this now invalidate any Z coordinate that is more than -1 or less than -4? Or are the raw z coordinates only ever valid between 0 and -1 (again, surely z coordinates categorically cannot be positive?)..?
But let's assume that what actually happens, is that OpenGL (or glm) takes the clipping plane values and secretly negates them. So, my -1 to -4 becomes 1 to 4. Does this now invalidate any Z coordinate that is less than 1 and more than 4, being the reason why 0.0f, 0.0f, -1.0f wont be drawn on the screen?
At this stage, I would treat an answer as simply a pointer to a book or some material that has information on this matter.
No, points/vertices can have a positive z coordinate, but you won't see them unless the camera is moved back.
This article talks about that about a third of the way through.
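A tiny sketch of what "moving the camera back" means in the fixed-function pipeline, assuming a perspective projection with near below 4 and far above 4 has already been set on GL_PROJECTION (the numbers are arbitrary):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f); // "move the camera back 5 units" = move the world 5 units down -Z
glBegin(GL_TRIANGLES);
glVertex3f(0.0f, 0.0f, 1.0f); // authored at z = +1, ends up at z = -4 in eye space, so it is visible
glVertex3f(0.0f, 0.5f, 1.0f);
glVertex3f(0.5f, 0.0f, 1.0f);
glEnd();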
Your problem is that you don't understand the coordinate systems and transformations.
First off, there are the window coordinates. This is the pixel grid in your window, pure and simple. There is no z axis.
Next is NDC. Google it. It is a cube from -1 to 1 on the x, y and z axes. If you load both the modelview and projection matrices with the identity, this is the space you render in. By specifying the viewport you transform from NDC to window coordinates. Pixels from vertices outside the cube are clipped.
What you do with the projection and modelview matrices is create a transformation onto the NDC cube, making it cover your objects. When moving the camera, you alter that transform. The transform can map a vertex from any location into the NDC cube, including one with negative z coordinates.
That is the short version of how things work. The long version is too long to enter here. For more information please ask specific questions or, better yet, read some literature on the subject.
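If you want to convince yourself numerically, here is a small sketch using GLM (the library from your own snippet) showing that a vertex in front of the camera, i.e. with negative eye-space z, lands inside the NDC cube after the perspective divide:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    // the same kind of projection as in the question (angle in radians here)
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    // a vertex one unit in front of the camera (the camera looks down -Z)
    glm::vec4 clip = proj * glm::vec4(0.0f, 0.0f, -1.0f, 1.0f);
    glm::vec3 ndc = glm::vec3(clip) / clip.w; // perspective divide
    // prints a point inside [-1, 1] on every axis, so it survives clipping
    std::printf("ndc = (%f, %f, %f)\n", ndc.x, ndc.y, ndc.z);
    return 0;
}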
Can someone with OpenGl experience please suggest a strategy to help me solve an issue I'm having with rotations?
Imagine a set of world coordinate xyz axes bolted to the center of the universe; that is, for purposes of this
discussion they do not move. I'm also doing no translations, and the camera is fixed,
to keep things simple. I have a cube centered at the origin and the intent is
that pressing the 'x', 'y', and 'z' keys will increment
a variable representing the number of degrees to rotate the cube about the world xyz axes. Each key press is 90° (you can imagine rotating a lego brick
in such a way), so pressing the 'x' key increments a float property RotXdeg:
RotXdeg += 90.0f;
Likewise for the pressing the 'y' and 'z' keys.
A naive way to implement[1] this is:
Gl.glPushMatrix();
Gl.glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f);
Gl.glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
Gl.glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
Gl.glPopMatrix();
This of course has the effect of rotating the cube, and its local xyz axes, so the desired rotations about the world xyz axes have not been achieved.
(For those not familiar with OpenGl, this can be demonstrated by simply rotating 90° about the x axis
— which causes the local y axis to be oriented along the world z axis —
then a subsequent 90° rotation about the y, which to the user appears to be a rotation
about the world z axis).
I believe this post is asking for something similar, but the answer is not clear, and my understanding is that quaternions are just one way to solve the problem.
It seems to me that
there should be a relatively straightforward solution, even if it is not
particularly efficient. I've spent hours trying various ideas, including creating my own rotation matrices and trying ways to multiply them with the modelview matrix, but to no
avail. (I realize matrix multiplication is not commutative, but I have a feeling that's not the problem.)
([1] By the way, I'm using the Tao OpenGl namespace; thanks to http://vasilydev.blogspot.com for the suggestion.)
Code is here
If the cube lies at (0,0,0) in world space, world and local rotations have the same effect. If the cube were in another position, a 90° rotation would result in a quarter-circle orbit around (0,0,0). It is unclear what you are failing to achieve, and I'd also advise against using the old immediate mode for matrix operations. Nevertheless, a way to achieve world rotation that way is (see the sketch after this list):
- translate to (0,0,0)
- rotate 90 degrees
- translate back
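Here is a rough sketch of that in plain fixed-function calls (the Tao Gl.gl* wrappers map one-to-one); cx, cy, cz are made-up names for the cube's current world-space center, and remember that OpenGL applies the transform specified last to the vertices first:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(cx, cy, cz); // 3. move the cube back to where it was
glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f); // 2. rotate about the world axes
glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
glTranslatef(-cx, -cy, -cz); // 1. move the cube's center to the origin
drawCube(); // your existing drawing code
glPopMatrix();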
And this time I have loaded a model successfully! Yay!
But there's a slight problem, one that I had with another OBJ loader...
Here's what it looks like:
http://img132.imageshack.us/i/newglitch2.jpg/
Here's another angle if you can't see it right away:
http://img42.imageshack.us/i/newglitch3.jpg/
Now, this is supposed to look like a cube, but as you can see, the edges of the faces on the cube are very choppy.
Is anyone else having this problem? If anyone knows how to solve it, let me know.
Also, comment if there's any code that needs to be shown; I'll be happy to post it.
Hey, I played around with the code (changed some stuff) and this is what I have come up with:
ORIGINAL:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -10.0f);
Result: choppy image (look at the images)
CURRENT:
glMatrixMode(GL_MODELVIEW);
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
glTranslatef(0.f, 0.f, -50.0f);
glLoadIdentity();
Result: model is not choppy, but I cannot move the camera (the model is right in front of me)
gluPerspective(50.f,(double)800 / (double)600,0.f,200.f);
That's your problem right there: the third argument, the near clip distance, is 0.
The near clip distance must be greater than 0 for perspective projections. Actually you should choose near to be as far away as possible and the far clip plane to be as near as possible.
Say your depth buffer is 16 bits wide; then you slice the scene into 65536 slices. The slice distribution follows a 1/x law, and with near = 0 you're technically dividing by zero.
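So, as a sketch, the projection setup from the question with a non-zero near plane (the exact values are up to you; just keep near as large as your scene allows):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(50.0, (double)800 / (double)600, 1.0, 200.0); // near moved from 0 to 1
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.f, 0.f, -10.0f);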
Well this looks like a projection setting issue. Some parts of your cube, when transformed into clip space, exceed near/far planes.
From what I see you are using an orthographic projection matrix - it's standard for making 2D UI. Please review the nearVal and farVal of your glOrtho call. For 2D UI they are usually set to -1 and 1 respectively (or 0 and 1), so you may want to either scale down the cube or increase the view frustum depth by modifying those parameters.
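For example, if the cube extends a few units along each axis, something along these lines would keep it inside the orthographic frustum (the values are only illustrative):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-10.0, 10.0, -10.0, 10.0, -100.0, 100.0); // much deeper near/far range than the usual -1..1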
I'm trying to do a simple rotation in OpenGL but must be missing the point.
I'm not looking for a specific fix so much as a quick explanation or link that explains OpenGL rotation more generally.
At the moment I have code like this:
glPushMatrix();
glRotatef(90.0, 0.0, 1.0, 0.0);
glBegin(GL_TRIANGLES);
glVertex3f( 1.0, 1.0, 0.0 );
glVertex3f( 3.0, 2.0, 0.0 );
glVertex3f( 3.0, 1.0, 0.0 );
glEnd();
glPopMatrix();
But the result is not a triangle rotated 90 degrees.
Edit
Hmm, thanks to Mike Haboustak - it appeared my code was calling a SetCamera function that uses glOrtho. I'm too new to OpenGL to have any idea what this meant, but disabling it and rotating around the Z axis produced the desired result.
Ensure that you're modifying the modelview matrix by putting the following before the glRotatef call:
glMatrixMode(GL_MODELVIEW);
Otherwise, you may be modifying either the projection or a texture matrix instead.
Do you get a 1-unit straight line? It seems that a 90° rotation around Y is going to have you looking at the side of a triangle with no depth.
You should try rotating around the Z axis instead and see if you get something that makes more sense.
OpenGL has two matrices related to the display of geometry, the ModelView and the Projection. Both are applied to coordinates before the data becomes visible on the screen. First the ModelView matrix is applied, transforming the data from model space into view space. Then the Projection matrix is applied, which transforms the data from view space for "projection" onto your 2D monitor.
ModelView is used to position multiple objects to their locations in the "world", Projection is used to position the objects onto the screen.
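As a rough sketch of how the two are typically set up each frame in the fixed-function pipeline (width, height, angle and drawTriangle() are placeholders, not values from your code):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, (double)width / (double)height, 1.0, 100.0); // how the scene is projected onto the window
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0f, 0.0f, -5.0f); // "camera" 5 units back from the origin
glRotatef(angle, 0.0f, 0.0f, 1.0f); // per-object placement in the world
drawTriangle();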
Your code seems fine, so I assume from reading the documentation you know what the nature of functions like glPushMatrix() is. If rotating around Z still doesn't make sense, verify that you're editing the ModelView matrix by calling glMatrixMode.
The "accepted answer" is not fully correct - rotating around the Z will not help you see this triangle unless you've done some strange things prior to this code. Removing a glOrtho(...) call might have corrected the problem in this case, but you still have a couple of other issues.
Two major problems with the code as written:
Have you positioned the camera previously? In OpenGL, the camera is located at the origin, looking down the Z axis, with positive Y as up. In this case, the triangle is being drawn in the same plane as your eye, but up and to the right. Unless you have a very strange projection matrix, you won't see it. gluLookat() is the easiest command to do this, but any command that moves the current matrix (which should be MODELVIEW) can be made to work.
You are drawing the triangle in a left handed, or clockwise method, whereas the default for OpenGL is a right handed, or counterclockwise coordinate system. This means that, if you are culling backfaces (which you are probably not, but will likely move onto as you get more advanced), you would not see the triangle as expected. To see the problem, put your right hand in front of your face and, imagining it is in the X-Y plane, move your fingers in the order you draw the vertices (1,1) to (3,2) to (3,1). When you do this, your thumb is facing away from your face, meaning you are looking at the back side of the triangle. You need to get into the habit of drawing faces in a right handed method, since that is the common way it is done in OpenGL.
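Putting both points together, a hedged rewrite of the snippet from the question might look like this (the projection and gluLookAt values are only examples, not the one true setup):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, 4.0 / 3.0, 1.0, 100.0); // any sane perspective projection works
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(2.0, 1.5, 10.0,  // eye: pulled back along +Z so the triangle is in view
          2.0, 1.5, 0.0,   // look roughly at the triangle's centre
          0.0, 1.0, 0.0);  // up
glBegin(GL_TRIANGLES);
// counterclockwise (right-handed) winding as seen from this eye position
glVertex3f(1.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 1.0f, 0.0f);
glVertex3f(3.0f, 2.0f, 0.0f);
glEnd();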
The best thing I can recommend is to use the NeHe tutorials - http://nehe.gamedev.net/. They begin by showing you how to set up OpenGL in several systems, move onto drawing triangles, and continue slowly and surely to more advanced topics. They are very easy to follow.
Regarding Projection matrix, you can find a good source to start with here:
http://msdn.microsoft.com/en-us/library/bb147302(VS.85).aspx
It explains a bit about how to construct one type of projection matrix. Orthographic projection is the most basic/primitive form of such a matrix; basically what it does is take 2 of the 3 axis coordinates and project them onto the screen (you can still flip axes and scale them, but there is no warp or perspective effect).
Matrix transformation is most likely one of the most important things when rendering in 3D and basically involves 3 matrix stages:
Transform1 = Object coordinates system to World (for example - object rotation and scale)
Transform2 = World coordinates system to Camera (placing the object in the right place)
Transform3 = Camera coordinates system to Screen space (projecting to screen)
Usually the result of multiplying the 3 matrices is referred to as the WorldViewProjection matrix (if you ever bump into this term), since it transforms the coordinates from model space through world, then to camera, and finally to the screen representation.
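In GLM terms (just a sketch; the actual matrices of course depend on your scene) the composition reads right to left:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Transform1: object -> world (the model's own rotation/scale/position)
glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));
// Transform2: world -> camera
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // eye
                             glm::vec3(0.0f, 0.0f, 0.0f),  // target
                             glm::vec3(0.0f, 1.0f, 0.0f)); // up
// Transform3: camera -> screen (projection)
glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
// the combined WorldViewProjection matrix; note the right-to-left order
glm::mat4 worldViewProjection = proj * view * model;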
Have fun