How does glFrustum() work? [duplicate]

This question already has answers here:
The purpose of Model View Projection Matrix
(2 answers)
What exactly are eye space coordinates?
(3 answers)
Object, world, camera and projection spaces in OpenGL
(1 answer)
Closed 4 months ago.
As far as I understand, the near value is always greater than the far value (near = 1f, far = 0.4f here):
Gl.glFrustum(-1f, 1f, -1f, 1f, 1f, 0.4f);
since the Z axis is directed towards us, and the further away something is from us, the smaller its Z value.
I hope I got that right?
My three-dimensional figure lies between 0.3 and -0.4 on the Z axis (when we draw a figure, the Z axis is directed away from the observer, not towards him, right?).
I set the following values for glFrustum, and when I started the program I didn't see anything on the screen. But when I rotated the figure by 20 degrees about the X axis (turning it towards me), it became visible. That is, with the near and far I set, the figure did not fall into the viewing volume, but after turning it a little it did, right?
Gl.glFrustum(-1f, 1f, -1f, 1f, 1f, 0.4f);
And why is it that when I set far to 0, I see the figure normally, as if I hadn't used glFrustum at all?
Gl.glFrustum(-1f, 1f, -1f, 1f, 1f, 0.0f);
Please tell me if I made a mistake somewhere in my reasoning. Most of all I am wondering whether, when applying glFrustum(), the Z axis is now directed towards the observer rather than away from him.
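For reference, glFrustum's own convention is the opposite of the guess above: near and far must both be positive distances, with near < far, measured along the direction the camera looks (the negative Z axis in eye space). A small sketch (in Python rather than the GL bindings used above, building the matrix from the glFrustum reference formula) shows where the two planes land in NDC:

```python
def frustum(l, r, b, t, n, f):
    # Row-major version of the matrix described on the glFrustum man page.
    return [
        [2*n/(r-l), 0.0,        (r+l)/(r-l),   0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),   0.0],
        [0.0,       0.0,       -(f+n)/(f-n),  -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,           0.0],
    ]

def project(m, p):
    x, y, z, w = (sum(m[i][j]*p[j] for j in range(4)) for i in range(4))
    return (x/w, y/w, z/w)  # perspective divide -> NDC

# A valid call uses 0 < near < far, e.g. near=0.4, far=1.0 -- the reverse
# of the (1f, 0.4f) order in the question.
M = frustum(-1.0, 1.0, -1.0, 1.0, 0.4, 1.0)

# near and far are positive distances along the -Z viewing direction:
# eye-space z = -near lands on the near plane (NDC z = -1),
# eye-space z = -far  lands on the far plane  (NDC z = +1).
print(project(M, (0.0, 0.0, -0.4, 1.0)))
print(project(M, (0.0, 0.0, -1.0, 1.0)))
```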


Understanding how glm::ortho()'s arguments affect vertex location after projection

After searching many pages, the glm documentation, tutorials, etc., I'm still confused about some things.
I'm trying to understand why I need to apply the following transformations to get my 800x600 image (a fullscreen square; assume the user's screen is 800x600 for this minimal example) to draw over everything. Assume I'm only drawing CCW triangles. Everything renders fine in my code, but I have to do the following:
// Vertex data (x/y/z), using EBOs
0.0f, 600.0f, 1.0f,
800.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
800.0f, 600.0f, 1.0f
// Later on...
glm::mat4 m, v, p;
m = scale(m, glm::vec3(-1.0, 1.0, 1.0));
v = rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
(Note that just because I used the variable names m, v, and p doesn't mean they're actually the proper transformations for those names; the above just does what I want it to.)
I'm confused on the following:
Where are the orthographic bounds? I assume the projection points down the negative Z axis, but where do the left/right bounds come in? Does that mean [-400, 400] on the X axis maps to [-1.0, 1.0] NDC, or that [0, 800] maps to it? (I assume whatever the answer is here also applies to the Y axis.) The documentation just says "Creates a matrix for an orthographic parallel viewing volume."
What happens if you flip the third and fourth arguments? (I ask because I see people doing this and I don't know if it's a mistake/typo, works by a fluke, or properly works regardless.) The arguments in question (600.0f, 0.0f versus 0.0f, 600.0f):
p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
Now I assume this third question will be answered by the above two, but I'm trying to figure out whether this is why my first piece of code requires flipping everything on the X axis to work... which, I'll admit, I discovered by just messing around until it happened to work. I figure I need a 180-degree rotation to turn my plane around so it's on the -Z side; that just leaves me with figuring out the (-1.0, 1.0, 1.0) scaling.
The code provided in this example (minus the variable names) is the only stuff I use and the rendering works perfectly... it's just my lack of knowledge as to why it works that I'm unhappy with.
EDIT: Was trying to understand it from here by using the images and descriptions on the site as a single example of reference. I may have missed the point.
EDIT2: As a random question, since I always draw my plane at z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible (ex: 0.99, 1.01) for any reason? Assume nothing else is drawn or will be drawn.
You can think of the visible area in an orthographic projection as a cube given in view space. This cube is then mapped to the [-1, 1] cube in NDC, such that everything inside it is visible and everything outside is clipped away. By convention, the viewer looks along the negative Z axis, while +X is right and +Y is up.
How are the orthographic bounds mapped to NDC space?
The side lengths of the cube are given by the parameters passed to glm::ortho. In the first example, left and right are 0 and 800, so the span from 0 to 800 along the X axis is mapped to [-1, 1] along the NDC X axis. The same logic applies along the other two axes (top/bottom along Y, near/far along -Z).
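That mapping can be checked numerically. A minimal sketch (plain Python, using the X row of the matrix that glm::ortho(0, 800, ...) produces) of how [0, 800] lands on [-1, 1]:

```python
def ortho_x(x, left=0.0, right=800.0):
    # X component after an ortho projection: 2x/(r-l) - (r+l)/(r-l)
    return 2.0 * x / (right - left) - (right + left) / (right - left)

print(ortho_x(0.0))    # left edge  -> -1.0 in NDC
print(ortho_x(400.0))  # centre     ->  0.0
print(ortho_x(800.0))  # right edge -> +1.0
```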
What happens when the top and bottom parameters are exchanged?
Interchanging, for example, top and bottom is equivalent to mirroring the scene along that axis. The second diagonal element of an orthographic matrix is 2 / (top - bottom), so exchanging top and bottom only changes the sign of this element. The same holds for exchanging left with right, or near with far. This is sometimes used when the screen-space origin should be the lower-left corner instead of the upper-left.
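A one-liner makes the sign flip visible (the 2 / (top - bottom) element, computed directly in Python):

```python
def ortho_y_scale(bottom, top):
    # Second diagonal element of an orthographic projection matrix.
    return 2.0 / (top - bottom)

flipped      = ortho_y_scale(600.0, 0.0)  # glm::ortho(..., 600, 0, ...): origin top-left
conventional = ortho_y_scale(0.0, 600.0)  # glm::ortho(..., 0, 600, ...): +Y up
print(flipped, conventional)              # same magnitude, opposite sign
```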
Why do you have to rotate the quad by 180° and mirror it?
As described above, the near and far values are along the negative Z axis. Values of [0.5, 1.5] along -Z mean [-0.5, -1.5] in view-space coordinates. Since the plane is defined at z = 1.0, it is outside the visible area. Rotating it around the origin by 180 degrees moves it to z = -1.0, but now you are looking at it from the back, which means back-face culling strikes. Mirroring it along X changes the winding order, so back and front faces are swapped.
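The winding-order effect of that mirror can be sketched with a signed-area check (plain Python; back-face culling keys on exactly this sign):

```python
def signed_area(a, b, c):
    # Positive for counter-clockwise vertex order, negative for clockwise.
    return 0.5 * ((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))

tri      = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # CCW triangle
mirrored = [(-x, y) for x, y in tri]             # scale(-1, 1, 1) applied

print(signed_area(*tri), signed_area(*mirrored))  # 0.5 -0.5: winding flipped
```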
Since I always draw my plane at Z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible?
As long as you don't draw anything else, you can choose more or less whatever you want. Once multiple objects are drawn, the range between near and far determines how precisely differences in depth can be stored.

OpenGL gluPerspective Issues, No Display

I'm having trouble going from the explanation of gluPerspective found here: http://unspecified.wordpress.com/2012/06/21/calculating-the-gluperspective-matrix-and-other-opengl-matrix-maths/ to the actual input parameters needed for the function.
I have a cube that I'm displaying stuff in. The coordinates of the cube range from -10 to 10 in every direction.
Can someone give me an example of the gluPerspective() call needed to display that region? I tried gluPerspective(26, w/h, 10, 30), thinking that 26 degrees is the angle from the focal point (10 units from the box) to the middle of the box's top side, which gives me 10 units to the near edge and 30 to the far one. However, when I change from glOrtho(-10.0f, 10.0f, -10.0f, 10.0f, -10.0f, 10.0f); to gluPerspective(...), nothing is displayed on the screen.
You are likely missing a translate to get your model into the view frustum, and your clipping parameters could be a little better. When you use gluPerspective the camera starts at the origin, so your camera is inside the cube you are drawing. You probably can't tell because the faces at z = -10 are getting clipped; change your near clipping plane to 5 or 1 or something and you should see them.
The camera looks down the negative Z axis by default, so you should translate your model by (0, 0, -20) or so. Clipping parameters of near = 5 and far = 40 should make it visible; at a minimum, make sure near is greater than 0.
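To sanity-check those numbers (the translate by -20 and the clip planes come from the answer above; the field-of-view figures are a hypothetical back-of-the-envelope illustration): after translating, the cube's front face is 10 units from the camera, and seeing its full half-extent of 10 units needs a 90-degree vertical field of view, which is why a 26-degree fovy shows almost nothing:

```python
import math

half_extent = 10.0                 # cube spans [-10, 10]
front_dist  = 20.0 - half_extent   # front face after translating by (0, 0, -20)

# fovy needed so the whole front face fits vertically:
fovy_needed = 2.0 * math.degrees(math.atan2(half_extent, front_dist))
print(fovy_needed)   # 90 degrees

# half-height actually visible at that distance with the 26-degree guess:
visible = front_dist * math.tan(math.radians(26.0 / 2.0))
print(visible)       # roughly 2.3 units out of the 10 needed
```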
Hope this helps!

Why in OpenGL's LookAt, if our face is facing almost up the sky, we can still see things right in front of us? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What exactly is the UP vector in OpenGL's LookAt function?
This is related to: What exactly is the UP vector in OpenGL's LookAt function?
If the call is:
gluLookAt(512, 384, 2000,
512, 384, 0,
0.0f, 1.0f, 0.0f);
If I am sitting on a chair looking straight ahead, holding an iPad right in front of my eyes, then the top of my head is pointing up at the sky. Hence the (0, 1, 0) for the UP vector, as in the third row. What if I change it to (0, 0.00001, 1)? That means I am almost lying down, with my face and eyes now facing the sky. So how come the result is exactly the same as with (0, 1, 0)?
What could you possibly expect to happen?
You pass 3 sets of values: a camera position, a position for the camera to look at, and the direction of up. In your analogy, if you're looking up at the sky, you're not looking at your iPad. Therefore, your look-at position must have changed along with your up direction. And if you didn't change your look-at position, then what do you expect to happen when you change the up direction?
The up direction only affects where up is relative to where you're looking. If you want to change what you're looking at, you must actually change the look-at point. That's why it's there.
After more trial and error (I've only been learning OpenGL for a day), what I found is that the Up vector must have some component in the plane that is normal (perpendicular) to the camera-to-target vector.
In other words, in the example the view vector runs from (512, 384, 2000) to (512, 384, 0), so it points only in the Z direction. The Up vector must then have some component in the XY plane (the plane perpendicular to that Z-only vector).
If it has no X and no Y component, that is, if both are 0, then on my iPad 2 the image is not displayed at all. So the Up vector controls rotation within the XY plane in this case, and does not care about the Z direction at all.
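What's happening under the hood can be sketched with the orthonormalization that gluLookAt-style code performs (plain Python; the function names here are illustrative, not GL API). Only the component of up perpendicular to the view direction survives, so (0, 1, 0) and (0, 0.00001, 1) produce the identical camera basis, while an up vector with no XY component is parallel to the view direction and degenerates:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def camera_basis(eye, center, up):
    f = normalize(tuple(c - e for c, e in zip(center, eye)))  # forward
    s = normalize(cross(f, up))                               # side / right
    u = cross(s, f)                                           # recomputed "true" up
    return s, u, f

eye, center = (512.0, 384.0, 2000.0), (512.0, 384.0, 0.0)
print(camera_basis(eye, center, (0.0, 1.0, 0.0)))
print(camera_basis(eye, center, (0.0, 0.00001, 1.0)))  # identical basis
# With up = (0, 0, 1) the cross product is the zero vector and
# normalize() divides by zero -- the degenerate case behind the blank screen.
```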

Difficulties adjusting to OpenGL on the Mac

I know OpenGL itself is a frequently asked topic, but I couldn't find a solution to this specific problem. I've been following NeHe's tutorials, and I've run into some issues which I don't think should be happening:
When calling glRotatef, the first parameter, which should be the angle, appears to act as the speed of rotation instead.
Example:
glRotatef(0, 0.0f, 1.0f, 0.0f); // despite the constant numbers, the object rotates infinitely
I am using an NSTimer to loop through the drawing method, which I suspect may be part of the issue.
Also, instead of the object rotating 360 degrees around like it should, the object's angle increments to 180 and then decrements back to 0. This happens with both 2D and 3D objects.
I saw example code from Apple and elsewhere that didn't have this problem, but I was never able to figure out what exactly I am doing wrong.
The code you have there, glRotatef(0, 0.0f, 1.0f, 0.0f);, does not change the rotation at all; it simply requests a rotation of 0 degrees around the Y axis. If you want an object to rotate smoothly as time progresses, I would suggest the following:
Keep a counter that increments every time your timer fires. Then, before you draw the object you are displaying, reset the transformation matrix with glLoadIdentity() and call glRotatef(counter, 0.0f, 1.0f, 0.0f).
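The accumulation behaviour, and the 0 → 180 → 0 readback the question describes, can be sketched without GL at all (plain Python; the helper functions are hypothetical stand-ins for glRotatef's matrix math):

```python
import math

def rot_y(deg):
    # The 3x3 rotation matrix glRotatef(deg, 0, 1, 0) multiplies in.
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def angle_of(m):
    # Angle recovered from the trace; acos folds everything into [0, 180],
    # which is why a readback climbs to 180 and then appears to come back down.
    t = (m[0][0] + m[1][1] + m[2][2] - 1.0) / 2.0
    return math.degrees(math.acos(max(-1.0, min(1.0, t))))

# 200 "frames" of glRotatef(1, 0, 1, 0) with no glLoadIdentity in between:
m = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
for _ in range(200):
    m = matmul(m, rot_y(1.0))
print(angle_of(m))             # 200 degrees accumulated, folded back to 160

# glRotatef composes multiplicatively: N accumulated 1-degree steps
# equal a single rot_y(N), so rotations keep piling up every frame.
print(angle_of(rot_y(200.0)))  # same folded angle
```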