I have a scene with one simple triangle, and I am using perspective projection. My MVP matrix is set up (with the help of GLM) like this:
glm::mat4 Projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 View = glm::lookAt(
glm::vec3(0,0,5), // Camera is at (0,0,5), in World Space
glm::vec3(0,0,0), // and looks at the origin
glm::vec3(0,1,0) // Head is up (set to 0,-1,0 to look upside-down)
);
glm::mat4 Model = glm::mat4(1.0f);
glm::mat4 MVP = Projection * View * Model;
It all works fine: I can change the values of the camera and the triangle is still displayed properly.
But I want to use orthographic projection, and when I change the projection matrix to orthographic it behaves unpredictably: either I can't see the triangle at all, or I see only a small part of it in the corner of the screen. To use the orthographic projection, I do this:
glm::mat4 Projection = glm::ortho(0.0f, 800.0f, 600.0f, 0.0f, -5.0f, 5.0f);
while I don't change anything in the View and Model matrices. It just doesn't work properly.
I just need a push in the right direction: am I doing something wrong? What am I missing? What should I do to properly set up an orthographic projection?
PS: I don't know if it's needed, but these are the coordinates of the triangle:
static const GLfloat g_triangle[] = {
-1.0f, 0.0f, 0.0f,
1.0f, 0.0f, 0.0f,
0.0f, 2.0f, 0.0f,
};
Your triangle is about 1 unit large. If you use an orthographic projection matrix that spans 800 x 600 units, it is natural that your triangle appears very small. Just decrease the bounds of the orthographic matrix and make sure that the triangle is inside this area (e.g. with your current bounds the first vertex is outside of the view, because its x-coordinate is less than 0).
Furthermore, make sure that your triangle is not discarded by backface culling or z-clipping. By the way, negative values for zNear are a bit unusual but should work for orthographic projections.
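One possible fix, as a minimal sketch: keep the window's 4:3 aspect ratio so the triangle is not stretched, and pick bounds that enclose the triangle's coordinates (the exact values here are my choice, not the only option):
glm::mat4 Projection = glm::ortho(-8.0f / 3.0f, 8.0f / 3.0f, // left, right: triangle x is within [-1, 1]
                                  -2.0f, 2.0f,               // bottom, top: triangle y is within [0, 2]
                                  0.1f, 100.0f);             // near, far: the View matrix above places the triangle at z = -5 in view space
With these bounds the triangle occupies roughly half the height of the viewport.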
I am trying to create a 2D, top-down style camera in OpenGL. I would like to stick to the convention of using model-view-projection matrices, so that I can switch between a 3D view and a top-down view while the application runs. I am using the glm::lookAt method to create the view matrix.
However, there is something missing in my understanding. I am rendering a triangle on the screen, [very close to this tutorial][1], and that works perfectly fine (so there are no problems with windowing, display loops, vertex buffers, etc.). The triangle is centered at (0, 0), and its vertices are at -0.5/0.5 (so already in NDC).
I then added a uniform mat4 mpv; to the vertex shader. If I set the mpv matrix to:
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
I get the same, unmodified triangle, as expected, since (from my understanding) these are the default values for OpenGL.
Now I thought that if I changed the Z value of the camera position it would have the same effect as zooming in and out; however, all I get is the clear color, and no triangle is rendered.
// Trying to simulate zoom in and out by changing z value of camera
glm::vec3 camera_pos = glm::vec3(0.0f, 0.0f, -3.0f);
glm::vec3 target_pos = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 view = glm::lookAt(camera_pos, target_pos, glm::vec3(0.0f, 1.0f, 0.0f));
So I printed the view matrix, and noticed that all I was doing was translating the Z value, which makes sense.
I then added an ortho projection matrix, to make sure everything is in NDC, but I still get nothing.
// *2 because I'm on a Mac/high-res screen and the framebuffer scale is 2.
// Doing projection * view in one step and just updating the view uniform until I get it working.
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
Where is my misunderstanding taking place? I would like to:
Simulate a top down view where I can zoom in and out on the target.
Create a 2D camera that follows a target (racing car), so the camera_pos XY and target_pos XY will be the same.
Eventually add an option to switch to a 3D following camera, like a standard racing game's third-person view, hence the MPV matrix instead of just simple translations.
[1]: https://learnopengl.com/Getting-started/Hello-Triangle
The vertex coordinates are in the range [-0.5, 0.5], but the orthographic projection maps the cuboid volume with the left, bottom, near point (0, 0, 0.1) and the right, top, far point (800.0 * 2, 600.0 * 2, 100) onto the viewport.
Therefore, the triangle mesh just covers one fragment in the lower left of the viewport.
Change the orthographic projection from
view = glm::ortho(0.0f, 800.0f * 2, 0.0f, 600.0f * 2, 0.1f, 100.0f) * view;
to
view = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, 0.1f, 100.0f) * view;
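Note that translating an orthographic camera along its view axis does not change the apparent size of objects, which is why changing camera_pos.z had no zoom effect. For the zoom you asked about, one common approach is to scale the orthographic extents instead. A sketch, where the zoom variable and its range are my assumptions:
// Hypothetical zoom factor: 1.0 shows the [-1, 1] range, 2.0 shows [-0.5, 0.5] (zoomed in).
float zoom = 2.0f;
float halfExtent = 1.0f / zoom;
view = glm::ortho(-halfExtent, halfExtent, -halfExtent, halfExtent, 0.1f, 100.0f) * view;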
I have a problem with my ray generation that I do not understand: the direction of my ray is computed wrongly. I ported this code from DirectX 11, where it works fine, to Vulkan, so I was surprised that I could not get it to work:
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1); // unproject a point on the far plane
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - camPos.xyz);
Yet this code works perfectly:
vec4 nearPos = inverseViewProj * vec4(screenPos, 0, 1); // unproject a point on the near plane
nearPos /= nearPos.w;
vec4 farPos = inverseViewProj * vec4(screenPos, 1, 1); // unproject a point on the far plane
farPos /= farPos.w;
r.Origin = camPos.xyz;
r.Direction = normalize(farPos.xyz - nearPos.xyz);
[Edit] Matrix and camera positions are set like this:
// Clip-space correction (column-major): flip Y and remap depth from [-1, 1] to [0, 1]
const glm::mat4 clip(1.0f,  0.0f, 0.0f, 0.0f,
                     0.0f, -1.0f, 0.0f, 0.0f,
                     0.0f,  0.0f, 0.5f, 0.0f,
                     0.0f,  0.0f, 0.5f, 1.0f);
projMatrix = clip * glm::perspectiveFov(FieldOfView, float(ViewWidth), float(ViewHeight), NearZ, FarZ);
viewMatrix = glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = viewMatrix[3];
[Edit2] What I see on screen is correct if I start at the origin. However, if I move left, for example, it looks as if I am moving right, and all my rays seem to be perturbed. In some cases, strafing the camera looks as if I am moving around a different point in space. I assume the camera position is not equal to the singularity of my perspective matrix, yet I cannot figure out why.
I think I am misunderstanding something basic. What am I missing?
Thanks to the comments I have found the problem. I was building my view matrix incorrectly, in the exact same way as in this post:
glm::inverse(glm::translate(glm::toMat4(Rotation), -Position));
This is equal to translating first and then rotating, which of course leads to unwanted behavior. In addition, the Position was negated, and camPos was obtained from the last column of the view matrix instead of the inverse view matrix, which is wrong.
It was not noticeable with my fractal raycaster, simply because I never moved far away from the origin, and because there is no point of reference in such an environment.
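For reference, a corrected construction might look like this sketch (assuming Position and Rotation describe the camera in world space):
// Build the camera's world transform as rotate-then-translate,
// then invert it to get the view matrix.
glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), Position) * glm::toMat4(Rotation);
viewMatrix = glm::inverse(cameraWorld);
buffer.inverseViewProjMatrix = glm::inverse(projMatrix * viewMatrix);
buffer.camPos = glm::vec4(Position, 1.0f); // take the world-space camera position directly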
I found this code on the internet, http://rioki.org/2013/03/07/glsl-skybox.html, for a cubemap environment texture (actually rendering a skybox), but I do not understand why it works.
void main()
{
    // Copy the modelview matrix and zero out its translation column,
    // keeping only the rotational part.
    mat4 r = gl_ModelViewMatrix;
    r[3][0] = 0.0;
    r[3][1] = 0.0;
    r[3][2] = 0.0;

    // Undo the projection and the rotation-only view transform to turn
    // the clip-space vertex into a direction for the cubemap lookup.
    vec4 v = inverse(r) * inverse(gl_ProjectionMatrix) * gl_Vertex;
    gl_TexCoord[0] = v;

    // Pass the vertex through unchanged; it is already in clip space.
    gl_Position = gl_Vertex;
}
So gl_Vertex is in world coordinates, but what do we get by multiplying that by the inverse of the projection matrix and then the inverse of the modelview matrix?
This is the code I use to draw my skybox:
void SkyBoxDraw(void)
{
GLfloat SkyRad = 1.0f;
glUseProgramObjectARB(glsl_program_skybox);
glDepthMask(0);
glDisable(GL_DEPTH_TEST);
// Cull backs of polygons
glCullFace(GL_BACK);
glEnable(GL_CULL_FACE);
glEnable(GL_TEXTURE_CUBE_MAP);
glBegin(GL_QUADS);
//////////////////////////////////////////////
// Negative X
glTexCoord3f(-1.0f, -1.0f, 1.0f);
glVertex3f(-SkyRad, -SkyRad, SkyRad);
glTexCoord3f(-1.0f, -1.0f, -1.0f);
glVertex3f(-SkyRad, -SkyRad, -SkyRad);
glTexCoord3f(-1.0f, 1.0f, -1.0f);
glVertex3f(-SkyRad, SkyRad, -SkyRad);
glTexCoord3f(-1.0f, 1.0f, 1.0f);
glVertex3f(-SkyRad, SkyRad, SkyRad);
......
......
glEnd();
glDepthMask(1);
glDisable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_CUBE_MAP);
glUseProgramObjectARB(0);
}
So gl_Vertex is in world coordinates ...
No no no, gl_Vertex is in object/model-space unless I see some code elsewhere (e.g. how your vertex position is calculated in the actual non-shader portion of your program) that indicates otherwise :) In OpenGL we go from object-space to eye/view/camera-space when we multiply by the combined Model*View matrix. As you can see, there are lots of names for the same coordinate spaces, but object-space is definitely not a synonym for world-space. Setting r[3] to (0, 0, 0, 1) basically re-positions the camera's origin without affecting direction, which is useful when all you want to know is the direction for a cubemap lookup.
That is, in a nutshell, what you want when using cubemaps: just a simple direction vector. The fact that textureCube (...) takes a 3D vector instead of a 4D one is an immediate hint that it is looking for a direction instead of a position. Position vectors have a 4th component; directions do not. So, technically, if you wanted to port this shader to modern OpenGL you would probably use an out vec3 and swizzle .xyz off of v, since v.w is unnecessary.
... but what do we get by multiplying that by the inverse of the projection matrix and then the inverse of the modelview matrix?
You are basically undoing the projection when you multiply by the inverse of these matrices. The only way this shader makes sense is if the coordinates you are passing for your vertices are defined in clip-space. So instead of going from object-space through the GL pipeline and winding up in screen-space at the end, you want the reverse of that; and since the viewport is not involved in your shader, we cannot be dealing with screen-space. A little more information on how your vertex positions are calculated should clear this up.
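If you do port this to modern OpenGL, the matrix work can also be done once on the CPU rather than per-vertex. A hedged GLM sketch, where the variable names are illustrative:
glm::mat4 rotOnlyView = viewMatrix;                 // your camera's view matrix
rotOnlyView[3] = glm::vec4(0.0f, 0.0f, 0.0f, 1.0f); // drop the translation column
glm::mat4 invViewRotProj = glm::inverse(rotOnlyView) * glm::inverse(projectionMatrix);
// Upload invViewRotProj as a uniform; in the vertex shader the cubemap
// direction is then (invViewRotProj * clipSpaceVertex).xyz.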
I am looking for some examples of (non-deprecated) OpenGL code that actually show the conversion between coordinates, step by step, from start to finish, in a 2D way, using non-standard coordinates, e.g.:
GLfloat Square[] =
{
-5.5f, -5.0f, 0.0f, 1.0f,
-5.5f, 5.0f, 0.0f, 1.0f,
5.5f, 5.0f, 0.0f, 1.0f,
5.5f, -5.0f, 0.0f, 1.0f
};
and then, using coordinates such as the above, converting them into the necessary coordinates so they map to -1..1 and are output to the screen.
Every single example and tutorial I have found on the net only does so using coordinates in the range -1 to 1.
Environment: C++.
Well, if you use GLM (http://glm.g-truc.net/index.html) you can just use glm::ortho to create a projection matrix, in the same way you would use glOrtho in old-style OpenGL.
E.g.:
glm::mat4 projectionMatrix = glm::ortho(-5.0f, 5.0f, -5.0f, 5.0f, 0.0f, 1.0f);
If you then plug that into your shader, you should get a mapping from -5 to +5 instead of -1 to +1, or whatever scale it is that you want.
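Applied to the square above, a minimal sketch might look like this (the uniform name and shader wiring are my assumptions, not part of the original answer):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
// An orthographic volume that encloses the square's x in [-5.5, 5.5]
// and y in [-5.0, 5.0], mapping those coordinates into NDC.
glm::mat4 projection = glm::ortho(-6.0f, 6.0f, -6.0f, 6.0f, -1.0f, 1.0f);
// Upload it to a hypothetical "projection" uniform...
glUniformMatrix4fv(glGetUniformLocation(program, "projection"),
                   1, GL_FALSE, glm::value_ptr(projection));
// ...and in the vertex shader: gl_Position = projection * vertexPosition;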
When you use 'no' transformations in your project, you simply need to use coordinates that are already in normalized device coordinates. That is a box ranging from -1 to 1 on each axis.
In that case the vertex shader will contain a line similar to this:
gl_Position = attribVertexPos; // no transformation
For 2D, if you want to provide coords in a different range, all you need to do is scale the position:
for the range -10 to 10, scale the x and y components in the vertex shader, e.g. gl_Position = vec4(attribVertexPos.xy * 0.1, attribVertexPos.z, attribVertexPos.w); (scaling the whole vector, w included, would cancel out in the perspective divide)
or you can build a scaling matrix and use that instead, as sketched below.
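A minimal GLM sketch of the scaling-matrix alternative (the matrix and the shader wiring are illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Squeeze x and y by 10 so the range [-10, 10] lands in NDC [-1, 1].
glm::mat4 scale = glm::scale(glm::mat4(1.0f), glm::vec3(0.1f, 0.1f, 1.0f));
// Upload as a uniform and apply in the vertex shader:
// gl_Position = scale * attribVertexPos;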
I am using the GLM maths library for the following problem: converting a 2D screen position into 3D world space.
In an attempt to track down the problem, I have simplified the code to the following:
float screenW = 800.0f;
float screenH = 600.0f;
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, screenW, screenH);
glm::mat4 tmpView(1.0f);
glm::mat4 tmpProj = glm::perspective( 90.0f, screenW/screenH, 0.1f, 100000.0f);
glm::vec3 screenPos = glm::vec3(0.0f, 0.0f, 1.0f);
glm::vec3 worldPos = glm::unProject(screenPos, tmpView, tmpProj, viewport);
Now, with glm::unProject in this case, I would expect worldPos to be (0, 0, 1). However, it comes through as (127100.12, -95325.094, -95325.094).
Am I misunderstanding what glm::unProject is supposed to do? I have traced through the function and it seems to be working OK.
The Z component in screenPos corresponds to the values in the depth buffer. So 0.0f is the near clip plane and 1.0f is the far clip plane.
If you want to find the world pos that is one unit away from the screen, you can rescale the vector:
worldPos = worldPos / (worldPos.z * -1.f);
Note also that a screenPos of (0, 0) designates the bottom-left corner of the screen, while in worldPos (0, 0) is the center of the screen. So (0, 0, 1) should give you (-1.3333, -1, -1), and (400, 300, 1) should give you (0, 0, -1).
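Putting both steps together, a sketch using the question's own setup (note that the question passes 90.0f to glm::perspective, which older GLM builds interpret as degrees):
// Unproject the far-plane point, then rescale it to one unit in front of the camera.
glm::vec3 worldPos = glm::unProject(glm::vec3(0.0f, 0.0f, 1.0f),
                                    tmpView, tmpProj, viewport);
worldPos /= (worldPos.z * -1.0f);
// With a 90-degree FOV and a 4:3 aspect, worldPos ends up near (-1.3333, -1, -1).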