Recently I needed to switch to an orthographic projection using the GLM library, but with the orthographic projection my scene no longer renders at the center of my viewport.
My scene is simply a cube, and it rendered fine with glm::perspective. I don't understand the math in much depth; I'm just using the glm::ortho function.
So what do I need to do to set up the orthographic projection correctly?
Here is the code I used:
mat4 projection=ortho(0.0f, 800.0f, 600.0f, 0.0f,-1000.0f, 1000.0f);
mat4 view=lookAt(vec3(0,0,1),vec3(0,0,0),vec3(0,1,0));
mat4 model=mat4();
Then I send these three matrices to the shader, exactly as I did with the perspective projection. The result should be a quad in the center of my screen, but in my program it sits in the top-left corner of the screen, and only about a quarter of it is visible.
Your cube appears in the top-left corner of the screen because that's the origin (0,0,0) of the coordinate space specified by your orthographic projection.
With your previous perspective projection you probably had an origin at the center of the screen. You can get back to that by changing the values in your orthographic projection:
ortho(-(800.0f / 2.0f), 800.0f / 2.0f,
600.0f / 2.0f, -(600.0f / 2.0f),
-1000.0f, 1000.0f);
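For reference, here is roughly how that centered projection might slot into the matrices from the question; this is just a sketch, keeping the question's y-down orientation and clip range:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;

// Origin is now in the middle of the 800x600 viewport (y still pointing down).
mat4 projection = ortho(-400.0f, 400.0f,    // left, right
                        300.0f, -300.0f,    // bottom, top
                        -1000.0f, 1000.0f); // near, far
mat4 view = lookAt(vec3(0, 0, 1), vec3(0, 0, 0), vec3(0, 1, 0));
mat4 model = mat4(1.0f); // explicit identity; plain mat4() is not guaranteed to be identity in newer GLM
mat4 mvp = projection * view * model; // send to the shader as before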
I'm having a hard time figuring out what's going on with my texture:
Basically I am fetching a webcam stream as my underlying 2d texture canvas in OpenGL, and in my paintGL() I'm drawing stuff on it (as RGBA images with GL_BLEND).
Since I'm using a Kinect as a data source, I'm also getting the depth values from a tracked skeleton (a person), and converting them into GL values (XYZ varying between 0.0f and 1.0f).
My goal is that a loaded 2D texture, for instance a shirt, properly tracks the person in my RGB output display. But it seems my understanding of orthographic projection is wrong:
I'm constantly loading the 4 converted vertices into a VBO, but whenever I put the texture on top of this dynamic quad, it's always facing the screen.
I thought that putting this dynamic quad between the "background" canvas and the camera would result in a proper projection of the quad onto the canvas, which would give me the impression of a warping 2D texture that seems to "bend" whenever the person rotates.
But the texture is always facing the camera and doesn't rotate.
I've also tried rotating manually via a matrix and setting that in my shader, but again it only rotates the vertex quad itself (so the rotation simply makes the texture appear smaller) and then puts the texture on top, instead of rotating the texture along with it.
So, is it somehow possible to properly apply this to the texture?
I've thought about mixing a perspective projection in, but actually have no idea how to implement this...
EDIT:
I've actually already set my projection matrix up like the following:
In resizeGL():
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
In paintGL():
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST); // turning this on/off makes no difference
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
program.setUniformValue("mvp_matrix", projection);
program.setUniformValue("texture", 0);
//draw 2d background quad
drawQuad();
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// switch to frustum to give perspective view
projection.setToIdentity();
projection.frustum(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
projection.translate(0.0f, 0.0f, 3.0f);
// bind cloth texture and draw ontop 2d quad
clothTexture->bind();
program.setUniformValue("mpv_matrix", projection);
drawShirtQuad();
// reset to ortho view
projection.setToIdentity();
projection.ortho(0.0f, 1.0f, 0.0f, 1.0f, 2.0f, -5.0f);
// release texture
clothTexture->release();
glDisable(GL_BLEND);
clothTexture is a QOpenGLTexture that has successfully loaded an RGBA image from a file.
Result: whenever I activate the frustum perspective, I get a black screen. I think everything is set up correctly: the point of view is translated along the positive z-axis in resizeGL(), all the cloth vertices vary between 0 and 1 in XYZ, and the background is positioned at:
(0.0f, 0.0f, -1.0f), (1.0f, 0.0f, -1.0f), (1.0f, 1.0f, -1.0f), (0.0f, 1.0f, -1.0f).
So the cloth object is always positioned between the background plane and the point of view. Am I missing something in the frustum setup? I've simply set it up the same way as the ortho...
EDIT:
Sorry for not mentioning it; the matrix I'm using is a QMatrix4x4 (see its frustum() documentation).
These functions multiply the current matrix by the one built from the arguments, which should yield the same result as if I defined a view matrix separately and then set my shader uniform "mvp_matrix" to projection * view, if I'm not mistaken. Maybe something like lookAt will do the trick; I'll just keep experimenting. :)
You need to use a perspective projection to achieve the desired result; glm can build one for you with glm::perspective.
Moving the vertices yourself isn't needed, since you get the proper positions once the rotation is applied in your model matrix.
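For illustration, a minimal sketch of that with glm; the field of view, aspect ratio, camera position, and rotation angle below are made-up example values, not taken from the question:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Perspective projection: vertical FOV (radians in recent GLM versions), aspect, near, far.
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

// Camera a few units in front of the scene, looking at the origin.
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
                             glm::vec3(0.0f, 0.0f, 0.0f),
                             glm::vec3(0.0f, 1.0f, 0.0f));

// The rotation of the shirt quad belongs in the model matrix.
glm::mat4 model = glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0.0f, 1.0f, 0.0f));

// Upload as a single MVP (the question's "mvp_matrix" uniform).
glm::mat4 mvp = projection * view * model;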
EDIT: In your code, where can I look up the .frustum and .translate methods, i.e. which library does the projection object come from? It doesn't look like you are doing Projection * View by translating the frustum matrix. Read up on the roles of the standard matrices.
Regarding debugging: if you get a black screen instead of the clear color, the problem is not with the matrix but earlier in the pipeline. I also recommend printing your perspective matrix and comparing it to a known-correct one (which you can get, for example, from the glm library).
I'm currently facing some perspective issues when trying to render the axes of a coordinate system into my scene. For these axes I draw three orthogonal lines that go through the center of my 3D cube.
It's pretty tough to explain what the problem is, so I guess the most demonstrative way of presenting it is to post some pictures.
1) A view of the whole scene.
2) A zoomed-in view of the origin of the coordinate system.
3) When I zoom in a tiny bit further, two of the axes disappear and the other one seems to be displaced for some reason.
Why does this happen and how can I prevent it?
My modelview and projection matrices look like the following:
// Set ProjectionMatrix
projectionMatrix = glm::perspective(90.0f, (GLfloat)width / (GLfloat) height, 0.0001f, 1000.f);
glBindBuffer(GL_UNIFORM_BUFFER, globalMatricesUBO);
glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(projectionMatrix));
glBindBuffer(GL_UNIFORM_BUFFER, 0);
// Set ModelViewMatrix
glm::mat4 identity = glm::mat4(1.0); // Start with the identity as the transformation matrix
glm::mat4 pointTranslateZ = glm::translate(identity, glm::vec3(0.0f, 0.0f, -translate_z)); // Zoom in or out by translating in z-direction based on user input
glm::mat4 viewRotateX = glm::rotate(pointTranslateZ, rotate_x, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the whole scene about the x-axis based on user input
glm::mat4 viewRotateY = glm::rotate(viewRotateX, rotate_y, glm::vec3(0.0f, 1.0f, 0.0f)); // Rotate the whole scene about the y-axis based on user input
glm::mat4 pointRotateX = glm::rotate(viewRotateY, -90.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Rotate the camera by -90 degrees about the x-axis to get a frontal view of the scene
glm::mat4 viewTranslate = glm::translate(pointRotateX, glm::vec3(-dimensionX/2.0f, -dimensionY/2.0f, -dimensionZ/2.0f)); // Translate the origin to be the center of the cube
That's called "clipping". The axis is hitting the near-clip plane and thus is being clipped. The third axis is not "displaced"; it is simply partially clipped. Take your second image and cover up most of it, so that you only see part of the diagonal axis; that's what you're getting.
There are a few general solutions to this. First, you could just not allow the user to zoom in that far. Or you could adjust the near clip plane inward as the camera is moved closer to the target object. This will also cause precision problems for far away objects, so you'll probably want to adjust your far clip plane inward too.
Alternatively, you can just turn on depth clamping (assuming you have GL 3.x+, or access to ARB_depth_clamp or NV_depth_clamp). This isn't a perfect solution, as things will still be clipped when they get behind the camera. And things that intersect the near clip plane will no longer have proper depth buffering if two such objects overlap. But it's generally good enough.
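For the depth-clamping route, the core of it is a single enable; the dynamic near-plane idea is sketched next to it, where the scaling factor and clamp bounds are made-up example values and distanceToTarget stands in for however you track the camera-to-object distance (e.g. translate_z above). The snippet reuses projectionMatrix, width, and height from the question:
// Option 1: depth clamping (core GL 3.x, or the depth clamp extensions mentioned above).
glEnable(GL_DEPTH_CLAMP);

// Option 2 (sketch): pull the near plane in as the camera gets closer to the target.
float nearPlane = glm::clamp(distanceToTarget * 0.01f, 0.0001f, 0.1f);
projectionMatrix = glm::perspective(90.0f, (GLfloat)width / (GLfloat)height, nearPlane, 1000.f);
// (Newer GLM versions expect the field of view in radians: glm::radians(90.0f).)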
I am currently working on a little toy program with OpenGL which shows a scene in clip-space view, i.e. it draws a cube to visualize the canonical view volume and inside the cube, the projectively transformed model is drawn. To show a code snippet for the model drawing:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(projectionMat);
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
So, naturally, the drawn model is "distorted" (which is the desired behaviour). However, the lighting is wrong, as the surface normals are also transformed by the projection matrix and, thus, are not orthogonal to their surfaces after transform. What I am trying to accomplish is lighting that is "correct" in the sense that the surfaces of the distorted models have correct normals.
The question is - how can I do that? I was playing with the usual transposed-inverse-matrix rule for normals, but as far as I understand, that's what OGL does with its normals by default. I think I would have to recalculate the surface normals AFTER the surfaces are transformed with the modelview matrix, but how to do that? Or is there another way?
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(projectionMat);
The projection matrix belongs on the GL_PROJECTION matrix stack (glMatrixMode(GL_PROJECTION)). Normals are transformed with the inverse transpose of the modelview matrix, so if there is a projection component in the modelview, it messes up your normal transformation.
The correct code would be
glMatrixMode(GL_PROJECTION);
glLoadMatrixd(projectionMat);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
If you're using fixed-function, you must put all of this in your projection matrix, including the scale, translation, and rotation that happen after the projection:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, 1.0f, -1.0f);
glMultMatrixd(projectionMat);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixd(modelviewMat);
glEnable(GL_LIGHTING);
draw_model();
glDisable(GL_LIGHTING);
This works because the positions (ie: what you see) are transformed by both the projection and modelview matrices, but the fixed-function lighting is done only in view space (ie: after modelview but before projection).
In fact, this is exactly why fixed-function GL has a distinction between the two matrices.
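For comparison, if you were doing this with shaders rather than fixed-function, the same separation can be written out explicitly: positions get the full (scaled) projection * modelview, while normals get the inverse transpose of the modelview only. A rough glm sketch, assuming glm::mat4 copies of the question's matrices and invented uniform locations:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::scale
#include <glm/gtc/matrix_inverse.hpp>     // glm::inverseTranspose
#include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

// clipSpaceScale plays the role of the glScalef(1, 1, -1) above.
glm::mat4 clipSpaceScale = glm::scale(glm::mat4(1.0f), glm::vec3(1.0f, 1.0f, -1.0f));

// Positions are transformed by everything, including the projection.
glm::mat4 mvp = clipSpaceScale * projection * modelview;

// Normals are transformed by the modelview only (no projection component).
glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(modelview));

glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
glUniformMatrix3fv(normalMatrixLocation, 1, GL_FALSE, glm::value_ptr(normalMatrix));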
I have just gotten into implementing skyboxes and am doing so with OpenGL/GLSL and GLM as my math library. I assume the problem is matrix related and I haven't been able to find an implementation that utilizes the GLM library:
The model for the skybox loads just fine; the camera, however, circles it as if it were a third-person camera orbiting the skybox in 3D.
I update my skybox matrix every time my camera updates. Because I use glm::lookAt, it is created essentially the same way as my view matrix, except that I use (0, 0, 0) for the position.
Here is my view matrix creation. It works fine in rendering of objects and geometry:
direction = glm::vec3(cos(anglePitch) * sin(angleYaw), sin(anglePitch), cos(anglePitch) * cos(angleYaw));
right = glm::vec3(sin(angleYaw - 3.14f/2.0f), 0, cos(angleYaw - 3.14f/2.0f));
up = glm::cross(right, direction);
glm::mat4 viewMatrix = glm::lookAt(position, position+direction, up);
Similarly, my sky matrix is created in the same way with only one change:
glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f);
glm::mat4 skyView = glm::lookAt(position, position + direction, up);
I know a skybox should not apply translation and should only take rotation into account, so I'm not sure what the issue is. Is there an easier way to do this?
Visual aids: one screenshot looking straight on before any movement, and one after rotating the camera.
My question is this: how do I set up the correct matrix for rendering a skybox using glm::lookAt?
Aesthete is right: the skybox/skydome is just another object, which means you do not change the projection matrix!
Your rendering should go something like this:
Clear the screen/buffers.
Set the camera.
Set the modelview to identity and then translate it to the position of the camera. You can get that position directly from the projection matrix (if my memory serves, at array positions 12, 13, 14); to obtain the matrix, see https://stackoverflow.com/a/18039707/2521214
Draw the skybox/skydome (do not cross your z_far plane, or disable the depth test).
Optionally clear the Z-buffer, or re-enable the depth test.
Draw your scene stuff (do not forget to set the modelview matrix for each model you draw).
Of course, you can temporarily set your camera position (in the projection matrix) to (0, 0, 0) and leave the modelview matrix as identity; that is sometimes the more precise approach, but do not forget to set the camera position back after drawing the skybox.
Hope it helps.
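Since the question specifically asks for a glm::lookAt-based setup: a common alternative is to strip the translation from the regular view matrix so only the rotation reaches the skybox. A minimal sketch with glm, reusing position, direction, and up from the question and assuming projection is the perspective matrix:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Regular view matrix, exactly as in the question.
glm::mat4 viewMatrix = glm::lookAt(position, position + direction, up);

// Skybox view: keep only the rotational part by dropping the translation column.
glm::mat4 skyView = glm::mat4(glm::mat3(viewMatrix));

// Render the skybox with projection * skyView (no model translation),
// so it rotates with the camera but never moves relative to it.
glm::mat4 skyMVP = projection * skyView;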
I use glm::perspective(80.0f, 4.0f/3.0f, 1.0f, 120.0f); and multiply it by
glm::mat4 view = glm::lookAt(
glm::vec3(0.0f, 0.0f, 60.5f),
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(0.0f, 1.0f, 0.0f)
);
My question touches on OpenGL and maths. It relates to drawing a GUI over my viewport. I don't know how to get the proper coordinates to draw, for example, a square that covers ¼ of the window. If I don't use the perspective and glm::lookAt(...) matrices (identity matrices instead), I can draw my GUI by setting X,Y coordinates in <-1.0, 1.0>, and when I put a vertex at (-1.0, -1.0), it ends up at the bottom-left corner of the window.
How do I get the same effect while also using perspective and lookAt?
Don't try to fiddle everything into one single projection. Just switch your projection to something that better suits your GUI drawing needs. OpenGL is a state machine, and it's perfectly normal to switch the parameters multiple times while rendering a single image.
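A minimal sketch of that idea: draw the 3D scene with the perspective/lookAt matrices from the question, then switch to a plain ortho over <-1, 1> for the GUI pass. The uniform location and the draw functions are placeholders, not real calls from the question:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// 3D pass: perspective projection and lookAt view, as in the question.
glm::mat4 proj3D = glm::perspective(80.0f, 4.0f / 3.0f, 1.0f, 120.0f);
glm::mat4 mvp3D = proj3D * view * model;
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp3D));
drawScene(); // placeholder

// GUI pass: simple ortho over normalized coordinates, no view matrix at all.
glm::mat4 projGUI = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f);
glDisable(GL_DEPTH_TEST); // so the GUI draws on top of the 3D scene
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(projGUI));
drawGuiQuad(); // placeholder: e.g. a quad covering a quarter of the window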