I understand that the camera in OpenGL is defined to be looking in the negative Z direction. So in a simple example, I imagine that for my vertices to be rendered, they must be defined similarly to the following:
rawverts = {
0.0f, 0.0f, -1.0f,
0.0f, 0.5f, -1.0f,
0.5f, 0.0f, -1.0f,
};
However, absolutely no guide will tell me the answer. Everywhere I look, the "Hello triangle" example is made with the z coordinate left at 0, and whenever a more complex mesh is defined the coordinates are not even shown. I still have no idea about the possible values the coordinates can take in order to be drawn onto the screen. Take, for example, glm::perspective:
glm::mat4 projectionMatrix = glm::perspective(
FoV, // The horizontal Field of View, in degrees : the amount of "zoom". Think "camera lens". Usually between 90° (extra wide) and 30° (quite zoomed in)
4.0f / 3.0f, // Aspect Ratio. Depends on the size of your window. Notice that 4/3 == 800/600 == 1280/960, sounds familiar ?
0.1f, // Near clipping plane. Keep as big as possible, or you'll get precision issues.
100.0f // Far clipping plane. Keep as little as possible.
);
But how can the clipping planes be defined with any positive values? The camera faces the -Z direction! Furthermore, if I create near/far clipping planes at, say, -1 and -4, does this now invalidate any Z coordinate that is greater than -1 or less than -4? Or are the raw z coordinates only ever valid between 0 and -1 (again, surely z coordinates categorically cannot be positive)?
But let's assume that what actually happens is that OpenGL (or glm) takes the clipping plane values and secretly negates them. So my -1 to -4 becomes 1 to 4. Does this now invalidate any Z coordinate that is less than 1 or greater than 4, and is that the reason why 0.0f, 0.0f, -1.0f won't be drawn on the screen?
At this stage, I would treat an answer as simply a pointer to a book or some other material that has information on this matter.
No, points/vertices can have a positive z coordinate, but you won't see them unless the camera is moved back.
This article talks about that about a third of the way through.
Your problem is that you don't understand the coordinate systems and transformations.
First off, there are window coordinates. They are the pixel grid in your window, pure and simple. There is no z-axis.
Next is NDC. Google it. It is a cube from -1 to 1 along the x, y, and z axes. If you load both the modelview and projection matrices with the identity, this is the space you render in. By specifying the viewport you transform from NDC to window coordinates. Vertices outside the cube are clipped.
What you do with the projection and modelview matrices is create a transformation onto the NDC cube, making it cover your objects. When you move the camera, you alter that transform. The transform can map a vertex from any location into the NDC cube, including locations with negative z coordinates.
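To make this concrete, here is a minimal sketch (GLM only, no rendering; the camera position, field of view, and clip planes are values I am assuming, not taken from the question) that pushes one of the question's vertices through a model-view-projection chain and checks whether it lands inside the NDC cube:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 model(1.0f);                                      // identity: no model transform
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),   // assumed camera at z = +3
                                 glm::vec3(0.0f),               // looking at the origin (down -Z)
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up vector
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    glm::vec4 vertex(0.0f, 0.0f, -1.0f, 1.0f);                  // the vertex from the question
    glm::vec4 clip = proj * view * model * vertex;
    glm::vec3 ndc = glm::vec3(clip) / clip.w;                   // perspective divide

    // The vertex is drawn only if every NDC component lies in [-1, 1];
    // with the values above it does, because its view-space z (-4) lies
    // between the near (-0.1) and far (-100) planes.
    std::printf("NDC: %f %f %f\n", ndc.x, ndc.y, ndc.z);
}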
That is the short version of how things work. The long version is too long to go into here. For more information, please ask specific questions or, better yet, read some literature on the subject.
I am making a program which reads a texture that should be applied to a mesh and generates some shapes which should be displayed on its triangles. I am converting the points in such a way that the original shape appears to be lying on the XZ plane (using the OpenGL convention for the axes, so Y is vertical, Z goes towards the camera, and X goes to the right). Now I have no idea how to properly measure the angle between the actual normal of a triangle and the vertical normal of the image (I mean (0, 1, 0)). I know that it's probably basic, but my mind refuses to cooperate on 3D graphics tasks recently.
Currently I use
angles.x = glm::orientedAngle(glm::vec2(normalOfTriangle.z, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles.y = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.z), glm::vec2(1.0f, 0.0f));
angles.z = glm::orientedAngle(glm::vec2(normalOfTriangle.x, normalOfTriangle.y), glm::vec2(1.0f, 0.0f));
angles = angles + glm::vec3(-glm::half_pi<float>(), 0.0f, glm::half_pi<float>());
Given my way of thinking this should give proper results, but the faces of the cube whose normals should be parallel to the Z axis appear to be unrotated in Z.
My logic is that I measure the angle from each axis, and then rotate about each axis by that angle so that the normal becomes vertical. But as I said, my mind glitches, and I cannot find the proper way to do it. Can somebody please help?
After searching many pages, the glm documentation, tutorials, etc., I'm still confused about some things.
I'm trying to understand why I need to apply the following transformations to get my 800x600 (fullscreen square, assume the screen of the user is 800x600 for this minimal example) image to draw over everything. Assume I'm only drawing CCW triangles. Everything renders fine in my code, but I have to do the following:
// Vertex data (x/y/z), using EBOs
0.0f, 600.0f, 1.0f,
800.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
800.0f, 600.0f, 1.0f
// Later on...
glm::mat4 m, v, p;
m = scale(m, glm::vec3(-1.0, 1.0, 1.0));
v = rotate(v, glm::radians(180.0f), glm::vec3(0.0f, 1.0f, 0.0f));
p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
(Note that just because I used the variable names m, v, and p doesn't mean they're actually the proper transformations for those names; the above just does what I want it to.)
I'm confused on the following:
Where are the orthographic bounds? I assume the viewing volume points down the negative z-axis, but where do the left/right bounds come in? Does that mean [-400, 400] on the x-axis maps to [-1.0, 1.0] in NDC, or that [0, 800] maps to it? (I assume whatever the answer is here also applies to the y-axis.) The documentation just says "Creates a matrix for an orthographic parallel viewing volume."
What happens if you flip the following third and fourth arguments (I ask because I see people doing this and I don't know if it's a mistake/typo or it works by a fluke... or if it properly works regardless):
Arguments three and four, which are flipped between these two calls:
p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);
p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);
Now I assume this third question will be answered by the above two, but I'm trying to figure out if this is why my first piece of code requires flipping everything on the x-axis to work... which I will admit I arrived at just by messing around until it happened to work. I figure I need a 180 degree rotation to turn my plane around so it's on the -z side, however... so that just leaves me to figure out the -1.0, 1.0, 1.0 scaling.
The code provided in this example (minus the variable names) is the only stuff I use and the rendering works perfectly... it's just my lack of knowledge as to why it works that I'm unhappy with.
EDIT: Was trying to understand it from here by using the images and descriptions on the site as a single example of reference. I may have missed the point.
EDIT2: As a random question, since I always draw my plane at z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible (ex: 0.99, 1.01) for any reason? Assume nothing else is drawn or will be drawn.
You can think of the visible area of an orthographic projection as a box given in view space. This box is then mapped to the [-1, 1] cube in NDC, such that everything inside the box is visible and everything outside is clipped away. Generally, the viewer looks along the negative Z-axis, while +X is right and +Y is up.
How are the orthographic bounds mapped to NDC space?
The side lengths of the box are given by the parameters passed to glm::ortho. In the first example, the parameters for left and right are 0 and 800, so the space from 0 to 800 along the X axis is mapped to [-1, 1] along the NDC X axis. The same logic applies along the other two axes (top/bottom along Y, near/far along -Z).
What happens when the top and bottom parameters are exchanged?
Interchanging, for example, top and bottom is equivalent to mirroring the scene along that axis. If you look at the second diagonal element of an orthographic matrix, it is defined as 2 / (top - bottom). Exchanging top and bottom only changes the sign of this element. The same holds for exchanging left with right, or near with far. This is sometimes used when the screen-space origin should be the lower-left corner instead of the upper-left.
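As a rough illustration, here is a minimal sketch (GLM only; the sample point is an arbitrary value I picked) that prints the Y-scale element of both of the question's projection matrices and shows the same view-space point landing at mirrored Y positions in NDC:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 p1 = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);  // Y grows downwards (origin top-left)
    glm::mat4 p2 = glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.5f, 1.5f);  // Y grows upwards (origin bottom-left)

    // The Y scale, 2 / (top - bottom), differs only in sign between the two.
    std::printf("p1[1][1] = %f, p2[1][1] = %f\n", p1[1][1], p2[1][1]);

    // The same view-space point maps to mirrored Y positions in NDC (+0.5 vs -0.5).
    glm::vec4 pt(400.0f, 150.0f, -1.0f, 1.0f);
    glm::vec4 a = p1 * pt;
    glm::vec4 b = p2 * pt;
    std::printf("p1 * pt: %f %f %f\n", a.x, a.y, a.z);
    std::printf("p2 * pt: %f %f %f\n", b.x, b.y, b.z);
}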
Why do you have to rotate the quad by 180° and mirror it?
As described above, the near and far values are measured along the negative Z-axis. Values of [0.5, 1.5] along -Z mean [-0.5, -1.5] in world-space coordinates. Since the plane is defined at z = 1.0, it is outside the visible area. Rotating it around the origin by 180 degrees moves it to z = -1.0, but now you are looking at it from the back, which means back-face culling strikes. Mirroring it along X changes the winding order, and thus the back and front sides are swapped.
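To check that numerically, here is a minimal sketch (GLM only) that runs one corner of the question's quad through the projection alone and then through the full chain; the values it prints are just the arithmetic described above:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main() {
    glm::mat4 m = glm::scale(glm::mat4(1.0f), glm::vec3(-1.0f, 1.0f, 1.0f));    // mirror along X
    glm::mat4 v = glm::rotate(glm::mat4(1.0f), glm::radians(180.0f),
                              glm::vec3(0.0f, 1.0f, 0.0f));                     // 180 degrees around Y
    glm::mat4 p = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, 0.5f, 1.5f);

    glm::vec4 corner(800.0f, 0.0f, 1.0f, 1.0f);   // one corner of the quad, at z = +1

    glm::vec4 without = p * corner;               // projection only: z maps to -4, outside [-1, 1], clipped
    glm::vec4 with = p * v * m * corner;          // full chain: z maps to 0, inside the NDC cube

    std::printf("without rotate/mirror: %f %f %f\n", without.x, without.y, without.z);
    std::printf("with rotate/mirror:    %f %f %f\n", with.x, with.y, with.z);
}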
Since I always draw my plane at Z = 1.0, should I restrict my orthographic projection near/far plane to be as close to 1.0 as possible?
As long as you don't draw anything else, you can basically choose whatever you want. When multiple objects are drawn, the range between near and far determines how precisely differences in depth can be stored.
Can someone with OpenGl experience please suggest a strategy to help me solve an issue I'm having with rotations?
Imagine a set of world coordinate xyz axes bolted to the center of the universe; that is, for purposes of this discussion they do not move. I'm also doing no translations, and the camera is fixed, to keep things simple. I have a cube centered at the origin, and the intent is that pressing the 'x', 'y', and 'z' keys will increment a variable representing the number of degrees to rotate the cube about the world xyz axes. Each key press is 90° (you can imagine rotating a Lego brick in such a way), so pressing the 'x' key increments a float property RotXdeg:
RotXdeg += 90.0f;
Likewise for the 'y' and 'z' keys.
A naive way to implement[1] this is:
Gl.glPushMatrix();
Gl.glRotatef(RotXdeg, 1.0f, 0.0f, 0.0f);
Gl.glRotatef(RotYdeg, 0.0f, 1.0f, 0.0f);
Gl.glRotatef(RotZdeg, 0.0f, 0.0f, 1.0f);
Gl.glPopMatrix();
This of course has the effect of rotating the cube, and its local xyz axes, so the desired rotations about the world xyz axes have not been achieved. (For those not familiar with OpenGL, this can be demonstrated by simply rotating 90° about the x axis, which causes the local y axis to be oriented along the world z axis, and then doing a subsequent 90° rotation about the y axis, which to the user appears to be a rotation about the world z axis.)
I believe this post is asking for something similar, but the answer is not clear, and my understanding is that quaternions are just one way to solve the problem.
It seems to me that there should be a relatively straightforward solution, even if it is not particularly efficient. I've spent hours trying various ideas, including creating my own rotation matrices and trying ways to multiply them with the modelview matrix, but to no avail. (I realize matrix multiplication is not commutative, but I have a feeling that's not the problem.)
([1] By the way, I'm using the Tao OpenGl namespace; thanks to http://vasilydev.blogspot.com for the suggestion.)
Code is here
If the cube lies at (0,0,0), world and local rotations have the same effect. If the cube were in another position, a 90° rotation would result in a quarter-circle orbit around (0,0,0). It is unclear what you are failing to achieve, and I'd also advise against using the old immediate mode for matrix operations. Nevertheless, a way to achieve world rotation that way is (sketched in code after the list):
- translate to (0,0,0)
- rotate 90 degrees
- translate back
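A minimal sketch of those three steps using plain OpenGL calls (the Tao wrapper simply prefixes these with Gl.); cubeX/cubeY/cubeZ and DrawCube() are hypothetical names for the cube's world position and draw routine:

GLfloat cubeX = 0.0f, cubeY = 0.0f, cubeZ = 0.0f;   // hypothetical cube position (here the origin)

glPushMatrix();
glTranslatef(cubeX, cubeY, cubeZ);       // 3. translate back to the cube's position
glRotatef(90.0f, 1.0f, 0.0f, 0.0f);      // 2. rotate 90 degrees about the world X axis
glTranslatef(-cubeX, -cubeY, -cubeZ);    // 1. translate the cube to (0,0,0)
DrawCube();                              // the last transform specified is applied to the vertices first
glPopMatrix();

Since the cube in the question is already centered at the origin, the two translations are no-ops there, which is exactly the first sentence above: world and local rotations coincide.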
I'm trying to rotate a cube around the axis and what I'm doing is:
glTranslatef(0.0f, 0.0f, -60.0f);
glRotatef(angle, 0.0f, 1.0f, 0.0f);
I'm expecting it to move to -60 and rotate around the y axis in a circle, but instead it's just spinning around itself at the -60 coordinate. When I write it like this:
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glTranslatef(0.0f, 0.0f, -60.0f);
I get what I need, but I don't understand why. Why do they do the opposite? Can someone please explain?
When you apply a transform it is applied locally. Think of it as a coordinate system that you are moving around. You start with the coordinate system representing your view, and then you transform that coordinate system relative to itself. So in the first case, you are translating the coordinate system -60 along the Z axis of the coordinate system, and then you are rotating the coordinate system around the new Y axis at the new origin. Anything you draw is then drawn in that new coordinate system.
This actually provides a simpler way to think about transformations once you are used to it. You don't have to keep two separate coordinate systems in mind: one for the coordinate system that the transforms are applied in and one for the coordinate system that the geometry is drawn in.
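Reading the question's two snippets with that mental model (a sketch; drawCube() is a hypothetical stand-in for whatever geometry is being drawn):

// Case 1: spin in place at z = -60
glTranslatef(0.0f, 0.0f, -60.0f);     // move the local frame's origin to (0, 0, -60)
glRotatef(angle, 0.0f, 1.0f, 0.0f);   // then spin the frame about its own Y axis
drawCube();                           // the cube rotates around itself at z = -60

// Case 2: orbit around the Y axis
glRotatef(angle, 0.0f, 1.0f, 0.0f);   // first rotate the frame about Y
glTranslatef(0.0f, 0.0f, -60.0f);     // then move 60 units along the frame's now-rotated -Z
drawCube();                           // the cube sweeps a circle of radius 60 around the origin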
I've been trying to render a GL_QUAD (which is shaped as a trapezoid) with a square texture. I'd like to try and use OpenGL only to pull this off. Right now the texture is getting heavily distorted and it's really annoying.
Normally, I would load the texture and compute a homography, but that means a lot of work and an additional linear algebra library/direct linear transform function. I'm under the impression OpenGL can simplify this process for me.
I've looked around the web and have seen "Perspective-Correct Texturing, Q Coordinates, and GLSL" and "Skewed/Sheared Texture Mapping in OpenGL".
These all seem to assume you'll do some type of homography computation or use some parts of OpenGL I'm ignorant of ... any advice?
Update:
I've been reading "Navigating Static Environments Using Image-Space Simplification and Morphing" [PDF] - page 9 appendix A.
It looks like they disable perspective correction by multiplying the (s, t, r, q) texture coordinates by the z component of the model's world-space vertex.
So for a given texture coordinate (s, t, r, q) for a quad that's shaped as a trapezoid, where the 4 components are:
(0.0f, 0.0f, 0.0f, 1.0f),
(0.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 1.0f, 0.0f, 1.0f),
(1.0f, 0.0f, 0.0f, 1.0f)
Is this as easy as glTexCoord4f(s*vert.z, t*vert.z, r, q*vert.z)? Or am I missing some step, like messing with the GL_TEXTURE glMatrixMode?
Update #2:
That did the trick! Keep it in mind, folks: this problem is all over the web and there weren't any easy answers. Most involved directly recalculating the texture with a homography between the original shape and the transformed shape... a.k.a. lots of linear algebra and an external BLAS lib dependency.
Here is a good explanation of the issue & solution.
http://www.xyzw.us/~cass/qcoord/
working link: http://replay.web.archive.org/20080209130648/http://www.r3.nu/~cass/qcoord/
Partly copied and adapted from the link above, created by Cass:
One of the more interesting aspects of texture mapping is the space that texture coordinates live in. Most of us like to think of texture space as a simple 2D affine plane. In most cases this is perfectly acceptable, and very intuitive, but there are times when it becomes problematic.
For example, suppose you have a quad that is trapezoidal in its spatial coordinates but square in its texture coordinates.
OpenGL will divide the quad into triangles and compute the slopes of the texture coordinates (ds/dx, ds/dy, dt/dx, dt/dy) and use those to interpolate the texture coordinate over the interior of the polygon. For the lower left triangle, dx = 1 and ds = 1, but for the upper right triangle, dx < 1 while ds = 1. This makes ds/dx for the upper right triangle greater than ds/dx for the lower one. This produces an unpleasant image when texture mapped.
Texture space is not simply a 2D affine plane even though we generally leave the r=0 and q=1 defaults alone. It's really a full-up projective space (P3)! This is good, because instead of specifying the texture coordinates for the upper vertices as (s,t) coordinates of (0, 1) and (1, 1), we can specify them as (s,t,r,q) coordinates of (0, width, 0, width) and (width, width, 0, width)! These coordinates correspond to the same location in the texture image, but LOOK at what happened to ds/dx - it's now the same for both triangles!! They both have the same dq/dx and dq/dy as well.
Note that it is still in the z=0 plane. It can become quite confusing when using this technique with a perspective camera projection because of the "false depth perception" that this produces. Still, it may be better than using only (s,t). That is for you to decide.
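For a concrete case, here is a minimal immediate-mode sketch (the trapezoid dimensions are made up, and a texture is assumed to be bound and enabled): the top edge is half the width of the bottom edge, so its texture coordinates are scaled by q = 0.5 instead of being left at q = 1.

glBegin(GL_QUADS);
    // bottom edge: full width, the usual 2D coordinates with q = 1
    glTexCoord4f(0.0f, 0.0f, 0.0f, 1.0f); glVertex2f(-2.0f, -1.0f);
    glTexCoord4f(1.0f, 0.0f, 0.0f, 1.0f); glVertex2f( 2.0f, -1.0f);
    // top edge: half as wide, so scale s, t, and q by 0.5
    // (after the per-pixel division by q these are still (1,1) and (0,1))
    glTexCoord4f(0.5f, 0.5f, 0.0f, 0.5f); glVertex2f( 1.0f,  1.0f);
    glTexCoord4f(0.0f, 0.5f, 0.0f, 0.5f); glVertex2f(-1.0f,  1.0f);
glEnd();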
I would guess that most people wanting to fit a rectangular texture on a trapezoid are thinking of one of two results:
perspective projection: the trapezoid looks like a rectangle seen from an oblique angle.
"stretchy" transformation: the trapezoid looks like a rectangular piece of rubber that has been stretched/shrunk into shape.
Most solutions here on SO fall into the first group, whereas I recently found myself in the second.
The easiest way I found to achieve effect 2. was to split the trapezoid into a rectangle and right triangles. In my case the trapezoid was regular, so a quad and two triangles solved the problem.
Hope this can help.
Quoted from the paper:
"
At each pixel, a division is performed using the interpolated values of (s=w; t=w; r=w; q=w), yielding (s=q; t=q), which
are the final texture coordinates. To disable this effect, which is not
possible in OpenGL directly. "
In GLSL (now, at least) this is possible. You can add:
noperspective out vec4 v_TexCoord;
There's an explanation here:
https://www.geeks3d.com/20130514/opengl-interpolation-qualifiers-glsl-tutorial/