OpenGL Perspective Projection pixel perfect drawing

The target is to draw a shape, let's say a triangle, pixel-perfect (vertices shall be specified in pixels) and be able to transform it in the 3rd dimension.
I've tried it with an orthogonal projection matrix and everything works fine, but the shape doesn't have any depth - if I rotate it around the Y axis it looks like I would just scale it along the X axis (because an orthogonal projection obviously behaves like this). Now I want to try it with a perspective projection. But with this projection the coordinate system changes completely, and because of this I can't specify my triangle's vertices in pixels. Also, if the size of my window changes, the size of the shape changes too (because of the changed coordinate system).
Is there any way to change the coordinate system of the perspective projection so that I can specify my vertices as if I were using the orthogonal projection? Or does anyone have an idea how to achieve the target described in the first sentence?

The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1,-1,-1) to (1,1,1).
At Perspective Projection the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)    -1
0              0              -2*f*n/(f-n)    0
where:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
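For illustration, a minimal sketch (assuming GLM; the viewport size and field of view are example values) which sets up such a matrix with glm::perspective and shows the relation of the [0][0] and [1][1] elements to the field of view:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
float vp_w  = 800.0f, vp_h = 600.0f;   // example viewport size in pixels
float fov_y = 60.0f;                   // example vertical field of view in degrees
float aspect = vp_w / vp_h;
float tanFov = std::tan(glm::radians(fov_y) * 0.5f);
glm::mat4 prjMat = glm::perspective(glm::radians(fov_y), aspect, 0.1f, 100.0f);
// prjMat[0][0] == 1.0f / (tanFov * aspect)
// prjMat[1][1] == 1.0f / tanFov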
I assume that the view matrix is the identity matrix, and thus the view space coordinates are equal to the world coordinates.
If you want to draw a polygon where the vertex coordinates are translated 1:1 into pixels, then you have to draw the polygon in a plane parallel to the viewport. This means all points have to be drawn with the same depth. The depth has to be chosen in such a way that the transformation of a point in normalized device coordinates by the inverse projection matrix gives the vertex coordinates in pixels. Note, the homogeneous coordinates given by the transformation with the inverse projection matrix have to be divided by their w component to get cartesian coordinates.
This means that the depth of the plane depends on the field of view angle of the projection:
Assuming you set up a perspective projection like this:
float vp_w = .... // width of the viewport in pixel
float vp_h = .... // height of the viewport in pixel
float fov_y = ..... // field of view angle (y axis) of the view port in degrees < 180°
gluPerspective( fov_y, vp_w / vp_h, 1.0, vp_h*2.0f );
Then the depthZ of the plane with a 1:1 relation of vertex coordinates to pixels can be calculated like this:
float angRad = fov_y * PI / 180.0;
float depthZ = -vp_h / (2.0 * tan( angRad / 2.0 ));
Note, the center point of the projection to the viewport is (0,0), so the bottom left corner point of the plane is (-vp_w/2, -vp_h/2, depthZ) and the top right corner point is (vp_w/2, vp_h/2, depthZ). Ensure that the near plane of the perspective projection is less than -depthZ and the far plane is greater than -depthZ.
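A minimal sketch of this setup (legacy OpenGL, matching the gluPerspective call above; the viewport size, field of view and triangle vertices are assumptions):
float vp_w  = 640.0f;                 // assumed viewport width in pixels
float vp_h  = 480.0f;                 // assumed viewport height in pixels
float fov_y = 90.0f;                  // assumed field of view in degrees
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(fov_y, vp_w / vp_h, 1.0, vp_h * 2.0f);
float angRad = fov_y * PI / 180.0f;
float depthZ = -vp_h / (2.0f * tan(angRad / 2.0f));  // plane with the 1:1 pixel mapping
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// triangle specified in pixels, relative to the viewport center (0, 0), drawn at depthZ
glBegin(GL_TRIANGLES);
glVertex3f(-100.0f, -100.0f, depthZ);
glVertex3f( 100.0f, -100.0f, depthZ);
glVertex3f(   0.0f,  100.0f, depthZ);
glEnd();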
See further:
Both depth buffer and triangle face orientation are reversed in OpenGL
Transform the modelMatrix

Related

Find ViewPort Rectangle in 3D world Space (Opengl)

I need help with the matrix transformation to find the corners of my viewport in 3D coordinates (world space).
I have done some tests but I cannot find the solution.
Step 1:
I have the projection and model matrix available (viewport size too). To find the center of my "screen" I have used this:
OpenGL.UnProject(0.0, 0.0, 0.0) <- this function tells me where the center of my screen is in 3D space (correct!)
Another approach is to multiply the coordinate (0, 0, 0) by ProjectionMtx.Inverse.
Step 2:
Now I need, for example, the top left corner of my viewport; how can I find the 3D point in world space?
Probably I should work with the viewport size, but how?
This is my unproject method:
double[] mview = new double[16];
GetDouble(GL_MODELVIEW_MATRIX, mview);
double[] prj = new double[16];
GetDouble(GL_PROJECTION_MATRIX, prj);
int[] vp = new int[4];
GetInteger(GL_VIEWPORT, vp);
double[] r = new double[3];
gluUnProject(winx, winy, winz, mview, prj, vp, ref r[0], ref r[1], ref r[2]);
For example:
If I have my camera at (-40,0,0) and my viewport [0,0,1258,513] and I unproject my near plane points, I get this result:
left_bottom_near =>X=-39.7499881839701,Y=-0.0219584744091603,Z=0.946276352352364
right_bottom_near =>X=-39.7499881839701,Y=-0.0219584744091603,Z=0.946446903614738
left_top_near =>X=-39.7499881839701,Y=-0.0217879231516134,Z=0.946276352352364
right_top_near =>X=-39.7499881839701,Y=-0.0217879231516134,Z=0.946446903614738
I can understand the X value of my points, which is ~ equal to the X world value of my camera position, but what about the Y & Z? I cannot understand them.
gluUnProject converts from window coordinates to world space (or model space).
The view matrix converts from world space to view space.
The projection matrix transforms from view space to normalized device space. The normalized device space is a cube with the left, bottom, near of (-1, -1, -1) and the right, top, far of (1, 1, 1).
Finally the xy coordinates in normalized device space are mapped to the viewport (window coordinates). The viewport is defined by glViewport. The z component is mapped to the depth range set by glDepthRange (default [0.0, 1.0]). gluUnProject does the opposite of all these steps.
At orthographic projection the viewing volume is a cuboid. At Perspective projection the viewing volume is a frustum. The projection matrix transforms from view space to normalized device space.
What you can see on the viewport is the projection of the normalized device space to its xy plane.
In general the corners of the viewing volume can be obtained by un-projecting the 8 corner points of the window coordinates. For that you have to know the viewport rectangle and the depth range.
I assume that the viewport has the size of the window (0, 0, width, height) and that the depth range is [0.0, 1.0]:
left = 0.0;
right = width;
bottom = 0.0;
top = height;
left_bottom_near = gluUnProject(left, bottom, 0.0)
right_bottom_near = gluUnProject(right, bottom, 0.0)
left_top_near = gluUnProject(left, top, 0.0)
right_top_near = gluUnProject(right, top, 0.0)
left_bottom_far = gluUnProject(left, bottom, 1.0)
right_bottom_far = gluUnProject(right, bottom, 1.0)
left_top_far = gluUnProject(left, top, 1.0)
right_top_far = gluUnProject(right, top, 1.0)
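The same can be sketched with GLM (glm::unProject from <glm/gtc/matrix_transform.hpp>; view, projection, width and height are assumed to be the matrices and viewport size used for rendering):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
glm::vec4 viewport(0.0f, 0.0f, width, height);
glm::vec3 left_bottom_near = glm::unProject(glm::vec3(0.0f,  0.0f,   0.0f), view, projection, viewport);
glm::vec3 right_top_near   = glm::unProject(glm::vec3(width, height, 0.0f), view, projection, viewport);
glm::vec3 left_bottom_far  = glm::unProject(glm::vec3(0.0f,  0.0f,   1.0f), view, projection, viewport);
glm::vec3 right_top_far    = glm::unProject(glm::vec3(width, height, 1.0f), view, projection, viewport);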

Switching from perspective to orthogonal keeping the same view size of model and zooming

I have a fov angle = 60, a window width = 640 and height = 480, near = 0.01 and far = 100 planes, and I get the projection matrix using glm::perspective()
glm::perspective(glm::radians(fov),
                 width / height,
                 zNear,
                 zFar);
It works well.
Then I want to change the projection type to orthographic, but I don't know how to compute the input parameters of glm::ortho() properly.
I've tried many ways, but the problem is that after switching to the orthographic projection the size of the model object becomes different.
Say I have a cube with center at (0.5, 0.5, 0.5) and side length 1, and a camera with mEye at (0.5, 0.5, 3), mTarget at (0.5, 0.5, 0.5) and mUp (0, 1, 0). The view matrix is glm::lookAt(mEye, mTarget, mUp).
With the perspective projection it works well. With glm::ortho(-width, width, -height, height, zNear, zFar) my cube becomes a small pixel in the center of the window.
Also I've tried to implement this variant: How to switch between Perspective and Orthographic cameras keeping size of desired object
but the result is (almost) the same as before.
So, the first question is: how to compute the ortho parameters to keep the original view size of the object / position of the camera?
Also, zooming with
auto distance = glm::length(mTarget - mEye)
mEye = mTarget - glm::normalize(mTarget - mEye) * distance;
has no effect with ortho. Thus the second question is: how to implement zooming in the case of an ortho projection?
P.S.
I assume I understand ortho correctly. The proportions of the model don't depend on depth, but nevertheless I can still decide where the camera is to set the size of the model properly, and I can use zoom. Also I assume this is a simple and trivial task, for example when developing a 3D viewer/editor/etc. Correct me if it is not.
how to compute the ortho parameters to keep the original view size of the object / position of the camera?
With an orthographic projection the 3-dimensional scene is projected in parallel onto the 2-dimensional viewport.
This means that objects projected onto the viewport always have the same size, independent of their depth (distance to the camera).
The perspective projection describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport.
This means an object projected onto the viewport becomes smaller with increasing depth.
If you switch from perspective to orthographic projection, only the objects in one plane, which is parallel to the viewport, keep their size. Note, a plane is 2-dimensional and has no "depth". This means a 3-dimensional object can never "look" the same when the projection is switched, but a 2-dimensional billboard can keep its size.
The visible size at perspective projection grows linearly with depth. The ratio of (half) size per depth depends on the field of view angle only:
float ratio_size_per_depth = tan(glm::radians(fov) / 2.0f);
If you want to set up an orthographic projection which keeps the size at a certain distance (depth), then you have to define that depth first:
e.g. Distance to the target point:
auto distance = glm::length(mTarget - mEye);
the projection can be set up like this:
float aspect = width / height;
float size_y = ratio_size_per_depth * distance;
float size_x = ratio_size_per_depth * distance * aspect;
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
how to implement zooming in the case of an ortho projection?
Scale the XY components of the orthographic projection:
glm::mat4 orthProject = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f*distance);
float orthScale = 2.0f;
orthProject = glm::scale(orthProject, glm::vec3(orthScale, orthScale, 1.0f));
Set a value for orthScale which is > 1.0 for zoom in and a value which is < 1.0 for zoom out.
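Putting both parts together, a sketch under the assumptions of the question (fov, width, height, zNear, zFar, mEye, mTarget as above; usePerspective and orthScale are hypothetical switches):
bool  usePerspective = false;   // hypothetical switch between the two projections
float orthScale      = 2.0f;    // hypothetical zoom factor for the orthographic case
float aspect   = width / height;
float distance = glm::length(mTarget - mEye);
float ratio_size_per_depth = tan(glm::radians(fov) / 2.0f);
float size_y = ratio_size_per_depth * distance;
float size_x = size_y * aspect;
glm::mat4 projection;
if (usePerspective)
    projection = glm::perspective(glm::radians(fov), aspect, zNear, zFar);
else
{
    projection = glm::ortho(-size_x, size_x, -size_y, size_y, 0.0f, 2.0f * distance);
    projection = glm::scale(projection, glm::vec3(orthScale, orthScale, 1.0f));  // zoom
}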

OpenGL 3.0 Window Collision Detection

Can someone tell me how to make triangle vertices collide with edges of the screen?
For the math library I am using GLM, and for window creation and keyboard/mouse input I am using GLFW.
I created a perspective matrix and a simple array of triangle vertices.
Then I multiplied all this in the vertex shader like:
gl_Position = projection * view * model * vec4(pos, 1.0);
Projection matrix is defined as:
glm::mat4 projection = glm::perspective(
45.0f, (GLfloat)screenWidth / (GLfloat)screenHeight, 0.1f, 100.0f);
I have a fully working camera and projection. I can move around my "world" and see the triangle standing there. The problem I have is that I want to make sure the triangle collides with the edges of the screen.
What I did was disable the camera and only enable keyboard movement. Then I initialized the translation matrix as glm::translate(model, glm::vec3(xMove, yMove, -2.5f)); and a scale matrix to scale by 0.4.
Now all of that is working fine. When I press RIGHT the triangle moves to the right, when I press UP the triangle moves up, etc... The problem is I have no idea how to make it stop moving when it hits the edges.
This is what I have tried:
triangleRightVertex is a glm::vec3 object.
0.4 is the scaling value that I used in the scaling matrix.
if(((xMove + triangleRightVertex.x) * 0.4f) >= 1.0f)
{
cout << "Right side collision detected!" << endl;
}
When I move the triangle to the right it does detect the collision when the x of the third vertex (the bottom right corner of the triangle) reaches the right side, but it goes a little bit beyond before it detects it. And when I tried moving up, it detected the collision when half of the triangle was already past the top.
I have no idea what to do here; can someone explain this to me, please?
Each of the vertex coordinates of the triangle is transformed by the model matrix from model space to world space, by the view matrix from world space to view space and by the projection matrix from view space to clip space. gl_Position is the homogeneous coordinate in clip space, which is further transformed by the perspective divide from clip space to normalized device space. The normalized device space is a cube, with a left, bottom, near of (-1, -1, -1) and a right, top, far of (1, 1, 1).
All the geometry which is in this cube (volume) is "visible" on the viewport.
In clip space the clipping of the scene is performed.
A point is inside the viewing volume if the x, y and z components are in the range defined by the negated w component and the w component of the homogeneous coordinate of the point:
-w <= x, y, z <= w
What you want to do is to check if a vertex x coordinate of the triangle is clipped. So you have to check if the x component of the clip space coordinate is in the viewing volume.
Calculate the clip space position of the vertices on the CPU, as the vertex shader does.
The glm library is very suitable for things like that:
glm::vec3 triangleVertex = ... ; // new model coordinate of the triangle
glm::vec4 h_pos = projection * view * model * glm::vec4(triangleVertex, 1.0f);
bool x_is_clipped = h_pos.x < -h_pos.w || h_pos.x > h_pos.w;
If you don't know how the orientation of the triangle is transformed by the model matrix and view matrix, then you have to do this for all 3 vertex coordinates of the triangle.
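A sketch of such a test (assuming the 3 model space vertices are available in a container triangleVertices and that projection, view, model are the matrices used in the vertex shader):
bool left = false, right = false, bottom = false, top = false;
for (const glm::vec3 &v : triangleVertices)
{
    glm::vec4 h = projection * view * model * glm::vec4(v, 1.0f);
    left   = left   || h.x < -h.w;   // vertex clipped at the left edge
    right  = right  || h.x >  h.w;   // vertex clipped at the right edge
    bottom = bottom || h.y < -h.w;   // vertex clipped at the bottom edge
    top    = top    || h.y >  h.w;   // vertex clipped at the top edge
}
if (right)
    cout << "Right side collision detected!" << endl;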

glReadPixels() how to get actual depth instead of normalized values?

I'm using pyopengl to get a depth map.
I am able to get a normalized depth map using glReadPixels(). How can I revert the normalized values to the actual depth in world coordinates?
I've tried playing with glDepthRange(), but it always performs some normalization. Can I disable the normalization at all?
When you draw your geometry, your vertex shader is supposed to transform everything into normalized device coordinates (where each component is between -1 and 1) via the view/projection matrix. There is no way to avoid it, everything outside of this range will get clipped (or clamped, if you enable depth clamping). Then, these device coordinates are transformed into window coordinates - X and Y coordinates are mapped into range specified with glViewport and Z into range set with glDepthRange.
You can't disable normalization, because the final values are required to be in 0..1 range. But you can apply the reverse transformation: first, map your depth values back to -1..1 range (if you didn't use glDepthRange, all you have to do is multiply them by 2 and subtract 1). Then, you need to apply the inverse of your projection matrix - you can either do that explicitly by calculating its inverse, or avoid matrix operations by looking into how your perspective matrix is calculated. For a typical matrix, the inverse transform will be
zNorm = 2 * zBuffer - 1
zView = 2 * near * far / ((far - near) * zNorm - near - far)
(Note that zView will be negative, between -near and -far, because in OpenGL your Z axis normally points towards the camera).
Although normally you don't want only depth - you want the full 3D points, so you might as well reconstruct the vector in normalized coordinates and then apply the inverse projection/view transform.
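A sketch of that reconstruction with GLM (hypothetical names: winX, winY are the pixel position, depth the value from glReadPixels, vp_w/vp_h the viewport size and projection the matrix used for rendering):
// window coordinates -> normalized device coordinates
glm::vec4 ndc(2.0f * winX / vp_w - 1.0f,
              2.0f * winY / vp_h - 1.0f,
              2.0f * depth - 1.0f,
              1.0f);
// normalized device coordinates -> view space (inverse projection, then divide by w)
glm::vec4 viewPos = glm::inverse(projection) * ndc;
viewPos /= viewPos.w;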
After the projection to the viewport, the coordinates of the scene are normalized device coordinates (NDC). The normalized device space is a cube, with the left, bottom, front coordinate of (-1, -1, -1) and the right, top, back coordinate of (1, 1, 1). The geometry in this cube is "visible" on the viewport (unless it is covered).
The Z coordinate of the normalized device space is mapped to the depth range (glDepthRange), which in general is [0, 1].
How the z-coordinate of the view space is transformed to a normalized device Z-coordinate and further to a depth depends on the projection matrix.
While at orthographic projection the Z component is calculated by a linear function, at perspective projection the Z component is calculated by a rational function.
See How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?.
This means, to convert from the depth of the depth buffer to the original Z-coordinate, the projection (orthographic or perspective) and the near and far plane have to be known.
In the following it is assumed that the depth range is [0, 1] and depth is a value in this range:
Orthographic Projection
n = near, f = far
z_eye = depth * (f-n) + n;
z_linear = z_eye
Perspective Projection
n = near, f = far
z_ndc = 2 * depth - 1.0;
z_eye = 2 * n * f / (f + n - z_ndc * (f - n));
If the perspective projection matrix is known this can be done as follows:
A = prj_mat[2][2]
B = prj_mat[3][2]
z_eye = B / (A + z_ndc)
Note, in any case transformation by the inverse projection matrix, would transform a normalized device coordinate to a coordinate in view space.
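A small helper sketching the perspective case (assuming GLM, a depth range of [0, 1], and that prj_mat is the projection matrix which was used for rendering):
// converts a depth buffer value in [0, 1] back to z_eye, as in the formula above
float DepthToZEye(float depth, const glm::mat4 &prj_mat)
{
    float z_ndc = 2.0f * depth - 1.0f;
    float A = prj_mat[2][2];
    float B = prj_mat[3][2];
    return B / (A + z_ndc);
}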

How to calculate 3D point (World Coordinate) from 2D mouse clicked screen coordinate point in Openscenegraph?

I was trying to place a sphere in 3D space from a user-selected point in 2D screen space. For this I am trying to calculate the 3D point from the 2D point using the technique below, but this technique is not giving the correct solution.
mousePosition.x = ((clickPos.clientX - window.left) / control.width) * 2 - 1;
mousePosition.y = -((clickPos.clientY - window.top) / control.height) * 2 + 1;
Then I am multiplying mousePosition by the inverse of the MVP matrix, but I am getting random numbers as the result.
For calculating the MVP matrix:
osg::Matrix mvp = _camera->getViewMatrix() * _camera->getProjectionMatrix();
How can I proceed? Thanks.
Under the assumption that the mouse position is normalized in the range [-1, 1] for x and y, the following code will give you 2 points in world coordinates projected from your mouse coords: nearPoint is the point in 3D lying on the camera frustum near plane, farPoint on the frustum far plane.
Then you can compute a line passing through these points and intersect that with your plane.
// compute the matrix to unproject the mouse coords (in homogeneous space)
osg::Matrix VP = _camera->getViewMatrix() * _camera->getProjectionMatrix();
osg::Matrix inverseVP;
inverseVP.invert(VP);
// compute world near far
osg::Vec3 nearPoint(mousePosition.x, mousePosition.y, -1.0f);
osg::Vec3 farPoint(mousePosition.x, mousePosition.y, 1.0f);
osg::Vec3 nearPointWorld = nearPoint * inverseVP;
osg::Vec3 farPointWorld = farPoint * inverseVP;
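For completeness, a sketch of intersecting that line with a plane (planePos and planeNormal are hypothetical; for osg::Vec3 the * operator between two vectors is the dot product):
osg::Vec3 dir = farPointWorld - nearPointWorld;
dir.normalize();
float denom = planeNormal * dir;                   // dot product of plane normal and ray direction
if (fabs(denom) > 1e-6f)                           // ray is not parallel to the plane
{
    float t = (planeNormal * (planePos - nearPointWorld)) / denom;
    osg::Vec3 hitPoint = nearPointWorld + dir * t; // 3D point where the sphere can be placed
}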