OpenGL weird vertex shader issue - C++

Before I start my question, a little bit of background. I started learning OpenGL not so long ago, and I have learned most of what I know about it here. I have only really gotten past 2 tutorials, and yes, I know I will eventually have to learn about matrices, but for now, nothing fancy. So let's get on with it.
Okay, so, I simplified my program just a bit, but no worries, it still recreates the same problem. For my example, we are making a purple triangle. I do the usual, initializing GLFW and GLEW, and create a window with the following hints:
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_SAMPLES, 8);
And then I create my window:
GLFWwindow* Window = glfwCreateWindow(640, 480, "Foo", NULL, NULL);
glfwMakeContextCurrent(Window);
glfwSwapInterval(0);
These are my vertices:
float Vertices[] = {
0.0f, 0.5f, 1.0f,
0.5f, -0.5f, 1.0f,
-0.5f, -0.5f, 1.0f
};
My shaders:
const char* vertex_shader =
"#version 330\n"
"in vec3 vp;"
"void main () {"
" gl_Position = vec4 (vp, 1.0);"
"}";
const char* fragment_shader =
"#version 330\n"
"out vec4 frag_colour;"
"void main () {"
" frag_colour = vec4 (0.5, 0.0, 0.5, 1.0);"
"}";
All is good, I compile the whole program, and voila! Purple triangle!
The yellow counter on the top left is FRAPS, by the way.
So, anyways, my brain gets this awesome idea (not really), what if I do this: vec4(vp, vp.z) in the vertex shader? Then I could get some sort of depth just by changing my z's in my buffer, I thought. Note that I wasn't thinking of replacing a perspective matrix, it was just a sort of an experiment. Please don't hate me.
And it worked, by changing the values, I got something that looked like depth, as in it looked like it was getting farther into the distance. Take a look, I changed the top vertex from 1.0 to 6.0:
Now here's the problem: I change the value to 999999999 (9 nines), and I get this:
Seems to work. Little difference from z = 6 though. Change it to 999999999999999999999999 (24 nines)? No difference. Take a look for yourself:
So this is weird: a big difference in numbers, yet little difference visually. Accuracy issues, maybe? Multiply the 24 nines by 349 and I get the same result. The kicker: multiply the 24 nines by 350 and the triangle disappears. This surprised me, because I thought the change would be visible and gradual. It clearly wasn't. However, changing w manually in the vertex shader, instead of passing vp.z, does seem to give a gradual result instead of a sudden disappearance. I hope someone can shed light on this. If you got this far, you're one awesome person for reading through all my crap; for that, I thank you.

Your model can be seen as a simple form of a pinhole camera where the vanishing point for the depth direction is the window center. So, lines that are parallel to the z-axis meet in the center if they are extended. The window center represents the point of infinite depth.
Changing a vertex's z (or w) component from 1 to 6 is a very large change in depth (the vertex is 6 times farther away from the camera than before). That's why the resulting vertex is closer to the screen center than before. If you double the z component again, it will move a bit closer to the screen center (the distance will be halved). But it is obviously already very close to the center, so this change is hardly recognizable. The same applies to the 999999... depth value.
You can observe this property on most natural images, especially with roads:
[Source: http://www.benetemps.com/road-warriors.htm ]
If you walk along the road for, let's say, 5 meters, you'll end up somewhere near the bottom of the image. Walk five more meters and you move closer to the image center. After another five meters you're closer still. You can see that the distance covered on screen gets shorter and shorter the farther away you are.

Nico gave you a great visual explanation of what happens. The same thing can also be explained with some simple math, using the definition of homogeneous coordinates.
Your input coordinates have the form:
(x, y, z)
By using vec4(vp, vp.z) in the vertex shader, you map these coordinates to:
(x, y, z, z)
After the division by w that happens when converting from clip coordinates to normalized device coordinates, this is mapped to:
(x / z, y / z, 1.0f)
As long as z has the value 1.0f, this is obviously still the same as (x, y, z), which explains why the 2nd and 3rd vertices don't change in your experiment.
Now, applying this to your first vertex as you vary the z value, it gets mapped as:
(0.0f, 0.5f, z) --> (0.0f, 0.5f / z, 1.0f)
As you approach infinity with the z value, the y coordinate converges towards 0.5f / infinity, which is 0.0f. Since the center of the screen is at (0.0f, 0.0f), the mapped vertex converges towards the center of the screen.
Also, the vertex moves less and less as the z value increases. Picking a few values:
z = 1.0f --> y = 0.5f
z = 10.0f --> y = 0.05f
z = 100.0f --> y = 0.005f
z = 1000.0f --> y = 0.0005f
For example, when you change z from 100.0f to 1000.0f, y changes by 0.0045, or only a little more than 0.2% of your window height. With a window height of 500 pixels, that would be just about 1 pixel.
Why the triangle disappears completely at a certain value is somewhat more mysterious. I suspect that it must be some kind of overflow/rounding issue during clipping.

Related

Z value always 1 or -1 when using `glm::perspective`

I'm trying to teach myself the ways of 3D programming with OpenGL, however I am struggling with some things, especially projection matrices.
I defined some vertices for a cube and successfully handed them to my graphics processor. The cube goes from (-0.5, -0.5, -0.5) to (0.5, 0.5, 0.5), which gets rendered fine.
To move it into my world coordinate system, I am using this model matrix:
auto model = glm::mat4(
glm::vec4(1, 0, 0, 0),
glm::vec4(0, 1, 0, 0),
glm::vec4(0, 0, 1, 0),
glm::vec4(0, 0, 0, 1)
);
model = glm::translate(model, glm::vec3(0.f, 0.f, 495.f));
model = glm::scale(model, glm::vec3(100.f, 100.f, 100.f));
This successfully moves my cube to (-50, -50, 445) -> (50, 50, 545), so it's now centered in the 200x200x1000 world coordinates I defined for myself.
My camera / view matrix is
auto view = glm::lookAt(
glm::vec3(0.f, 0.f, 5.f),
glm::vec3(0.f, 0.f, 0.f),
glm::vec3(0.f, 1.f, 0.f)
);
which moves the cube slightly closer, changing the z coordinates to 440 and 540 respectively. I don't understand why this is happening, but I guess it has something to do with glm expecting a right-handed coordinate system while I am working with a left-handed one? While this is not why I am posting this question, I would be happy if someone would clear it up for me.
Now to my actual problem: I am trying to make use of glm::perspective. I call it like this:
auto perspective = glm::perspective(glm::radians(55.f), 1.f, 0.f, 1000.f);
If I'm not mistaken, at a z value of 440 I can expect the clipping area to go from roughly -229 to 229, so the bottom-left cube vertex at (-50, -50) should be visible. I calculated this by drawing the frustum in 2D, when I noticed that I should be able to calculate the visible height at any distance from the camera using tan(alpha / 2) * distToCamera = maxVisibleCoordinate (working with a 1:1 aspect ratio). Is this a correct assumption? Here is my terrible drawing; maybe you can tell from it whether I have misunderstood something:
In the final step I am trying to get all this together in my vertex shader using
gl_Position = projection * view * model * vec4(pos.x, pos.y, pos.z, 1.0);
which yields a perfectly reasonable result for the x and y values, but the z value is always -1, which is, as far as I know, just right for not being displayed.
For my front-bottom-left vertex of the cube (-0.5, -0.5, -0.5) the result is (-96.04, -96.04, -440, -440), normalized to (-0.218, -0.218, -1).
For my back-top-right vertex of the cube (0.5, 0.5, 0.5) the result is (96.04, 96.04, -550, -550), normalized to (0.218, 0.218, -1).
What am I getting wrong, that my z value is lost and just set to -1 instead? When playing around with the camera position, the best I can get is getting it to 1, which also results in an empty window and is definitely not what I would expect.
A perspective projection matrix has, in its third row, the terms -(f+n)/(f-n) and -2fn/(f-n), where f is zfar and n is znear.
As you can see, if you put znear = 0, the -2fn/(f-n) term becomes zero, which is incorrect. Also, -(f+n)/(f-n) degenerates to -1, which is incorrect too: every eye-space depth then maps to the same value, so all depth information is lost.
So, the conclusion is, znear cannot be zero. It is usually a small value, for example, 0.1
Since Amadeus already answered the question correctly, I'm going to just use this space to add some clarifying information about why it's correct.
We can refer back to the diagram you provided to explain what the problem is: You have two planes, the near plane and the far plane, representing the range at which you may view objects. What the perspective matrix does is take everything between those two planes, within the frustum you've defined (mathematically a truncated pyramid, since our monitors are rectangular), and map it onto the flat near plane to create the final image. In a sense, you can think of the near plane as representing the monitor.
So given this context, what would happen if you set the near plane's distance to 0, making it identical to the camera? The plane would collapse to a single point, the apex of the pyramid. You cannot view objects drawn onto a single point; you need a surface with actual surface area to draw onto.
That is why it is inappropriate to set the near value to 0: it would turn the drawing surface into a single point, and you cannot mathematically render any objects on a single point. Hence the essential mathematical formulas backing the matrix break down and produce bad results if you try to do so anyway.

OpenGL - Object axes orientation; order of glm::translate and glm::rotate

I have found that tilting an object (by 23.4 degrees) changes the local or object space by the same angle. The following code comes before the rendering loop.
model[1] = glm::mat4(1.0f);
...
spheres[1].m_currentPosition = glm::vec3(65.0f, 0.0f, -60.0f);
...
model[1] = glm::translate(model[1], spheres[1].m_currentPosition);
model[1] = glm::rotate(model[1], glm::radians(-23.4f), glm::normalize(glm::vec3(0.0f, 0.0f, 1.0f)));
In the rendering loop I have very little code other than a regular rotation about what I specified before the rendering loop as,
rotationAxis[1] = glm::normalize(glm::vec3(0.0f, 1.0f, 0.0f));
This will cause a rotation about an axis tilted by 23.4 degrees, the following image being a static screen shot:
Where the lines meet at world coordinates (0, 0, 0).
===
If I reverse the first two lines, viz.,
model[1] = glm::rotate(model[1], glm::radians(-23.4f), glm::normalize(glm::vec3(0.0f, 0.0f, 1.0f)));
model[1] = glm::translate(model[1], spheres[1].m_currentPosition);
The result is,
===
In the rendering loop I can rotate the sphere in place about the specified rotationAxis[1], though the rotation is about a tilted 23.4 degree axis running through the blue top and bottom of the sphere in both cases.
Every other change to the (x, y, z) position of the sphere is about this now tilted frame of reference, again in both cases.
What I want is for the sphere to "orbit" in the plane of the horizontal line by calculating new (x, y, z) coordinates and then translating by the difference from the previous coordinates. The tilt would force me to adjust those coordinates accordingly. While this is hardly impossible, I am looking for a more straightforward solution, and a better understanding of what is happening.
I have read about the order of translating and rotating in OpenGL, though changing the order does not solve my problem.

shapes skewed when rotated, using openGL, glm math, orthographic projection

For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers. The most relevant one (2D opengl rotation causes sprite distortion) indicates that the problem was an incorrect ordering of transformations, but for now I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); //(The model is just the identity matrix.)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
-wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
-wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
wf, hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
0, 1, 2, // first Triangle
2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quat, dividing time by 1000 to get seconds:
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question is--should I not set my coordinates to 1/-1 in the context of a 2d game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to gl::ortho by width and height, then how do I transform coordinates so v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinates system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of your window in the first two parameters of glm::ortho:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast to avoid integer division
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);

OpenGL showing more/less of the world through resize

So a lot of questions online about resizing have been about maintaining the right ratios and avoid stretching etc. From what I understand, this would be done by setting the new ratio with gluOrtho2D.
However, I wasn't sure exactly how to go about showing MORE and LESS of the world upon resize. E.g. you have a plane that could travel from 0 to 100 along the x axis. Upon resizing, it should now (still same size) travel from 0 to 200.
EDIT: so what I mean is, I want everything in my game to stay the same size as before, but the "sky" if you will, should be bigger upon the resize, and my plane should be able to fly into that sky (since currently I have code that limits it to within the screen).
Similarly, if my screen is smaller, then the plane should no longer be able to fly to the section of the 'sky' that no longer exists
Initially, I'm setting up my program using the following lines, where everything about the game is stored in 'game', and XSize, YSize returns the size of the screen.
void init(void) {
glClearColor(0.0, 0.0, 0.3, 0.0); /* set background color to a dark blue */
glColor3f(1.0, 1.0, 1.0); /* set drawing color to white */
glMatrixMode(GL_PROJECTION);
glEnable (GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glLoadIdentity();
gluOrtho2D(0, game.getXSize()*game.getAspect(), 0, game.getYSize() / game.getAspect()); /* defines world window */
}
int main(int argc, char *argv[]) {
game = GameManager(GAMENAME, 1000, 750, 60);
/*SETUP*/
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(game.getXSize(), game.getYSize());
glutCreateWindow(GAMENAME);
/*Other GLUT main function lines here */
glutReshapeFunc(resize);
}
When I try to set up the gluOrtho2D in resize, however, the program sets up the background and stops drawing anything at all.
void resize(int w, int h){
game.setScreenSize(w,h);
glViewport(0, 0, w, h);
const GLfloat aspectRatio = (GLfloat)game.getXSize() / (GLfloat)game.getYSize();
gluOrtho2D(0, game.getXSize()*game.getAspect(), 0, game.getYSize() / game.getAspect());
}
I have, of course, managed to just use glViewport(0, 0, w, h) by itself, but that's pretty much the same as not doing anything at all: the graphics just stretch, and the functions I'm using to move objects to the mouse position no longer work properly, since glViewport is called by default if I don't register a reshape function.
The general way world coordinates get mapped to screen in OpenGL is:
world coordinates --> clip space coordinates --> device coordinates
The "world coordinates" are just whatever you feed to OpenGL as vertex data. They can be whatever you want, there are no rules.
The vertex shader (or matrix stack, if you are time traveling to the 1990s) is what transforms world coordinates to clip space coordinates.
The clip space coordinates go from -1 to +1. So (-1, -1) is the lower-left corner of the window, (-1, +1) is the top left, (+1, +1) is the top right, etc. This is the same no matter what size your window is. So if your window gets larger, the picture will also get larger, unless you scale down the clip space coordinates at the same time.
So if you want to keep the same world coordinates and keep the same size in pixels, you have to change the way world coordinates are transformed to clip space. In general, this means that if your window gets twice as big, your clip space coordinates should get half as big, in order to keep everything the same size.
Typically, to achieve this, you'll end up multiplying in a matrix that looks something like this:
int windowWidth = ..., windowHeight = ...;
double matrix[2][2] = {
{ 1.0 / windowWidth, 0.0 },
{ 0.0, 1.0 / windowHeight },
};
That's if you're using a 2D matrix. Change this appropriately if you are using glOrtho or for your particular vertex shader. Or just read the manual for glOrtho.
By using:
gluOrtho2D(-1.0f, 1.0f, -1.0f, 1.0f);
Which would be the same as:
glOrtho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
Then I'm assuming your problem is that when you resize the window with this projection, the scene ends up looking stretched, like this:
As you say this can be fixed by taking the aspect ratio into account. Given the width and height of your screen. Then you can calculate the aspect ratio and set the proper orthographic projection:
const GLfloat aspectRatio = (GLfloat)width / (GLfloat)height;
gluOrtho2D(-aspectRatio, aspectRatio, -1.0f, 1.0f);
This now results in everything scaling in relation to the aspect ratio, and subsequently allowing you to see more.
Since the above is actually a sphere in 3D, setting the near and far values is also needed:
glOrtho(-aspectRatio, aspectRatio, -1.0f, 1.0f, 1.0f, 100.0f);

Opengl transformations - perspective division confusion

I am trying to learn some OpenGL basics by reading OpenGL Superbible.
I am at the beginning of the 4th chapter and I have a question about the transformations.
Firstly, a relevant link: http://www.songho.ca/opengl/gl_transform.html
If I understand this pipeline (so to speak) right, if in my code I would have something like this
const float vertexPositions[] = {
0.75f, 0.75f, 0.0f, 1.0f,
0.75f, -0.75f, 0.0f, 1.0f,
-0.75f, -0.75f, 0.0f, 1.0f,
};
those coordinates are in so called object space coordinates, and I can specify each value as something in [-1,1] range.
After applying viewmodel matrix, each vertex coordinates can be any number and those coordinates will be in so called eye coordinates.
After applying projection matrix (be it perspective projection) we are in clip space, and still the numbers can have any possible value.
Now here is the point I am wondering about. On that page it is said that each vertex's x, y, z coordinates are divided by the fourth value w, which is present because we are using a homogeneous coordinate system, and after the division, x, y, z are in the range [-1, 1].
My question is: how can I be sure that after all those transformations the value of w will be such that dividing x, y, z by it gives something in the range [-1, 1]?
… object space coordinates, and I can specify each value as something in [-1,1] range.
You're not limited in the range for object coordinates.
My question is: how can I be sure that after all those transformations the value of w will be such that dividing x, y, z by it gives something in the range [-1, 1]?
The range [-1, 1] is what ends up in the viewport after the transformation. Everything outside that range is outside the viewport and hence clipped. There is nothing to ensure here: if things are in range, they are visible; if not, they are clipped.