OpenGL 3.0 Window Collision Detection - C++

Can someone tell me how to make triangle vertices collide with the edges of the screen?
For the math library I am using GLM, and for window creation and keyboard/mouse input I am using GLFW.
I created a perspective matrix and a simple array of triangle vertices.
Then I multiplied all of this in the vertex shader like:
gl_Position = projection * view * model * vec4(pos, 1.0);
The projection matrix is defined as:
glm::mat4 projection = glm::perspective(
45.0f, (GLfloat)screenWidth / (GLfloat)screenHeight, 0.1f, 100.0f);
I have a fully working camera and projection. I can move around my "world" and see the triangle standing there. The problem is that I want the triangle to collide with the edges of the screen.
What I did was disable the camera and only enable keyboard movement. Then I initialized the translation matrix as glm::translate(model, glm::vec3(xMove, yMove, -2.5f)); and a scale matrix to scale by 0.4.
Now all of that is working fine. When I press RIGHT the triangle moves to the right, when I press UP the triangle moves up, etc. The problem is that I have no idea how to make it stop moving when it hits the edges.
This is what I have tried:
triangleRightVertex is a glm::vec3 object (the bottom-right vertex of the triangle).
0.4 is the scaling value that I used in the scaling matrix.
if(((xMove + triangleRightVertex.x) * 0.4f) >= 1.0f)
{
    cout << "Right side collision detected!" << endl;
}
When I move the triangle to the right, it does detect the collision when the x of the third vertex (the bottom-right corner of the triangle) reaches the right side, but the triangle goes a little bit beyond the edge before the collision is detected. And when I tried moving up, it detected the collision only when half of the triangle was already past the top edge.
I have no idea what to do here. Can someone explain this to me, please?

Each of the vertex coordinates of the triangle is transformed by the model matrix from model space to world space, by the view matrix from world space to view space, and by the projection matrix from view space to clip space. gl_Position is the homogeneous coordinate in clip space, which is further transformed by the perspective divide from clip space to normalized device space. The normalized device space is a cube with a left, bottom, near corner of (-1, -1, -1) and a right, top, far corner of (1, 1, 1).
All geometry which is inside this cube (the view volume) is "visible" on the viewport.
The clipping of the scene is performed in clip space.
A point is inside the clip volume if its x, y and z components are in the range defined by the negated w component and the w component of its homogeneous coordinate:
-w <= x, y, z <= w
What you want to do is to check whether a vertex x coordinate of the triangle is clipped. So you have to check whether the x component of the clip space coordinate is in the view volume.
Calculate the clip space position of the vertices on the CPU, exactly as the vertex shader does.
The glm library is very suitable for things like that:
glm::vec3 triangleVertex = ... ; // new model coordinate of the triangle
glm::vec4 h_pos = projection * view * model * glm::vec4(triangleVertex, 1.0f);
bool x_is_clipped = h_pos.x < -h_pos.w || h_pos.x > h_pos.w;
If you don't know how the orientation of the triangle is transformed by the model matrix and the view matrix, then you have to do this for all 3 vertex coordinates of the triangle, as in the sketch below.
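A minimal sketch of that per-vertex test, assuming the 3 model-space vertices are stored in an array triangleVertices and that projection, view and model are the matrices fed to the shader (all names are placeholders for your own variables):

#include <glm/glm.hpp>

// Returns true if any vertex of the triangle is clipped at the left/right
// or bottom/top planes of the view volume.
bool triangleHitsScreenEdge(const glm::vec3 triangleVertices[3],
                            const glm::mat4 &projection,
                            const glm::mat4 &view,
                            const glm::mat4 &model)
{
    for (int i = 0; i < 3; ++i)
    {
        // same transformation as in the vertex shader
        glm::vec4 h_pos = projection * view * model * glm::vec4(triangleVertices[i], 1.0f);

        bool x_is_clipped = h_pos.x < -h_pos.w || h_pos.x > h_pos.w;
        bool y_is_clipped = h_pos.y < -h_pos.w || h_pos.y > h_pos.w;
        if (x_is_clipped || y_is_clipped)
            return true;
    }
    return false;
}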

Related

About view matrix and frustum culling

I was trying to determine if an object (sphere) is inside a view frustum. My strategy was first to get the view matrix:
glm::lookAt(cameraPosition, lookAtPosition, cameraUp);
where cameraPosition is the position of the camera, lookAtPosition is calculated as cameraPosition + cameraDirection, and cameraUp is pretty much self-explanatory.
After obtaining the view matrix, I calculate the 8 vertices of the frustum in the camera view, then multiply each point by the inverse of the view matrix to get their coordinates in world space.
Using these 8 points in world space, I can construct the 6 planes by using cross products (so their normals point inward). Then I take the dot product of the object vector (obtained by subtracting an arbitrary point on that plane from the object position) with the normal vector of each plane; if it is positive for all 6, I know the object is inside the frustum.
So my questions are:
Is the camera view always set at (0,0,0) and looking in the positive z direction?
The view matrix transforms the world view into the camera view. If I use its inverse to transform the camera view to the world view, is that correct?
My frustum points opposite to my cameraDirection, which causes nothing to be seen on the screen (it still draws things, but behind my camera).
Is the camera view always set at (0,0,0) and looking in the positive z direction?
The view space is the space which defines the view onto the scene.
Which part of the scene is "seen" (is projected onto the viewport) depends on the projection matrix. Generally (in OpenGL) the origin is (0, 0, 0) and the view space z axis points out of the viewport. However, the projection matrix inverts the z axis: it switches from the right-handed system of view space to the left-handed system of normalized device space.
the inverse of it to transform the camera view to the world view
Yes.
The view matrix transforms from world space to view space. The projection matrix transforms the Cartesian coordinates of view space to the homogeneous coordinates of clip space. Clip space coordinates are transformed to normalized device space by dividing the xyz components by the w component (the perspective divide).
The normalized device space is a cube with a minimum of (-1, -1, -1) and a maximum of (1, 1, 1), so the 8 corner points of the cube are the corners of the viewing volume in normalized device space.
(-1, -1, -1) ( 1, -1, -1) ( 1, 1, -1) (-1, 1, -1) // near plane
(-1, -1, 1) ( 1, -1, 1) ( 1, 1, 1) (-1, 1, 1) // far plane
If you want to transform a point from normalized device space back to world space, you have to:
transform it by the inverse projection matrix
transform it by the inverse view matrix
divide the xyz components of the result by its w component
glm::mat4 view;       // view matrix
glm::mat4 projection; // projection matrix
glm::vec3 p_ndc;      // a point in normalized device space

glm::mat4 inv_projection = glm::inverse( projection );
glm::mat4 inv_view       = glm::inverse( view );

glm::vec4 pt_h     = inv_view * inv_projection * glm::vec4( p_ndc, 1.0f );
glm::vec3 pt_world = glm::vec3( pt_h ) / pt_h.w;
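Putting this together for the frustum-culling case, here is a sketch (assuming view and projection are your own matrices) that unprojects all 8 corners of the normalized device cube into world space:

#include <glm/glm.hpp>

// Computes the 8 corner points of the view frustum in world space by
// unprojecting the corners of the normalized device cube.
void frustumCornersWorld(const glm::mat4 &view, const glm::mat4 &projection,
                         glm::vec3 corners[8])
{
    glm::mat4 inv_vp = glm::inverse(view) * glm::inverse(projection);
    int i = 0;
    for (int z = 0; z < 2; ++z)         // near plane, far plane
        for (int y = 0; y < 2; ++y)     // bottom, top
            for (int x = 0; x < 2; ++x) // left, right
            {
                glm::vec4 p_ndc(2.0f*x - 1.0f, 2.0f*y - 1.0f, 2.0f*z - 1.0f, 1.0f);
                glm::vec4 pt_h = inv_vp * p_ndc;
                corners[i++] = glm::vec3(pt_h) / pt_h.w; // perspective divide
            }
}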

How to calculate a 3D point (world coordinate) from a 2D mouse-clicked screen coordinate point in OpenSceneGraph?

I am trying to place a sphere in 3D space at a point the user selects in 2D screen space. For this I am trying to calculate the 3D point from the 2D point using the technique below, but it is not giving the correct result.
mousePosition.x = ((clickPos.clientX - window.left) / control.width) * 2 - 1;
mousePosition.y = -((clickPos.clientY - window.top) / control.height) * 2 + 1;
Then I am multiplying mousePosition with the inverse of the MVP matrix, but I am getting random numbers as the result.
For calculating the MVP matrix:
osg::Matrix mvp = _camera->getViewMatrix() * _camera->getProjectionMatrix();
How can I proceed? Thanks.
Under the assumption that the mouse position is normalized in the range [-1, 1] for x and y, the following code will give you 2 points in world coordinates projected from your mouse coords: nearPoint is the 3D point lying on the camera frustum near plane, farPoint the one on the frustum far plane.
Then you can compute the line passing through these points and intersect it with your plane (see the sketch after the code).
// compute the matrix to unproject the mouse coords (in homogeneous space)
osg::Matrix VP = _camera->getViewMatrix() * _camera->getProjectionMatrix();
osg::Matrix inverseVP;
inverseVP.invert(VP);
// compute world-space near and far points
osg::Vec3 nearPoint(mousePosition.x, mousePosition.y, -1.0f);
osg::Vec3 farPoint(mousePosition.x, mousePosition.y, 1.0f);
osg::Vec3 nearPointWorld = nearPoint * inverseVP;
osg::Vec3 farPointWorld = farPoint * inverseVP;
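A minimal sketch of the second step, intersecting the ray through those two points with a plane; planePoint and planeNormal are hypothetical placeholders for a point on your plane and its unit normal:

#include <osg/Vec3>
#include <cmath>

// Intersects the ray nearPointWorld -> farPointWorld with the plane given by
// planePoint and planeNormal; writes the hit point and returns true on success.
bool intersectPlane(const osg::Vec3 &nearPointWorld, const osg::Vec3 &farPointWorld,
                    const osg::Vec3 &planePoint, const osg::Vec3 &planeNormal,
                    osg::Vec3 &hit)
{
    osg::Vec3 rayDir = farPointWorld - nearPointWorld;
    float denom = planeNormal * rayDir; // osg::Vec3 operator* is the dot product
    if (std::fabs(denom) < 1e-6f)
        return false;                   // ray is parallel to the plane

    float t = (planeNormal * (planePoint - nearPointWorld)) / denom;
    if (t < 0.0f)
        return false;                   // plane is behind the near point

    hit = nearPointWorld + rayDir * t;  // 3D point to place the sphere at
    return true;
}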

OpenGL Perspective Projection pixel perfect drawing

The goal is to draw a shape, let's say a triangle, pixel-perfect (vertices shall be specified in pixels) and to be able to transform it in the third dimension.
I've tried it with an orthogonal projection matrix and everything works fine, but the shape doesn't have any depth: if I rotate it around the Y axis it looks as if I had just scaled it along the X axis (because an orthogonal projection obviously behaves like this). Now I want to try it with a perspective projection. But with this projection the coordinate system changes completely, and because of this I can't specify my triangle's vertices in pixels. Also, if the size of my window changes, the size of the shape changes too (because of the changed coordinate system).
Is there any way to change the coordinate system of the perspective projection so that I can specify my vertices as if I were using the orthogonal projection? Or does anyone have an idea how to achieve the goal described in the first sentence?
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from eye space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) by dividing by the w component of the clip coordinates. The NDC are in the range (-1, -1, -1) to (1, 1, 1).
At perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to the 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)      0              0               0
0              2*n/(t-b)      0               0
(r+l)/(r-l)    (t+b)/(t-b)    -(f+n)/(f-n)   -1
0              0              -2*f*n/(f-n)    0
where:
aspect = w / h
tanFov = tan( fov_y * 0.5 );
prjMat[0][0] = 2*n/(r-l) = 1.0 / (tanFov * aspect)
prjMat[1][1] = 2*n/(t-b) = 1.0 / tanFov
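As a quick consistency check, here is a minimal sketch (assuming a recent GLM, where glm::perspective takes the field of view in radians) verifying that the projection matrix fills exactly these two fields:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cassert>
#include <cmath>

int main()
{
    float fov_y  = glm::radians(45.0f);
    float aspect = 800.0f / 600.0f;
    glm::mat4 prjMat = glm::perspective(fov_y, aspect, 0.1f, 100.0f);

    // the two relations derived above
    float tanFov = std::tan(fov_y * 0.5f);
    assert(std::fabs(prjMat[0][0] - 1.0f / (tanFov * aspect)) < 1e-5f);
    assert(std::fabs(prjMat[1][1] - 1.0f / tanFov) < 1e-5f);
    return 0;
}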
I assume that the view matrix is the identity matrix, and thus the view space coordinates are equal to the world coordinates.
If you want to draw a polygon where the vertex coordinates are translated 1:1 into pixels, then you have to draw the polygon in a plane parallel to the viewport. This means all points have to be drawn with the same depth. The depth has to be chosen such that the transformation of a point in normalized device coordinates by the inverse projection matrix gives the vertex coordinates in pixels. Note that the homogeneous coordinates given by the transformation with the inverse projection matrix have to be divided by their w component to get Cartesian coordinates.
This means that the depth of the plane depends on the field of view angle of the projection:
Assuming you set up a perspective projection like this:
float vp_w = .... // width of the viewport in pixel
float vp_h = .... // height of the viewport in pixel
float fov_y = ..... // field of view angle (y axis) of the view port in degrees < 180°
gluPerspective( fov_y, vp_w / vp_h, 1.0, vp_h*2.0f );
Then the depthZ of the plane with a 1:1 relation of vertex coordinates to pixels can be calculated like this:
float angRad = fov_y * PI / 180.0; // PI = 3.14159265
float depthZ = -vp_h / (2.0 * tan( angRad / 2.0 ));
Note, the center point of the projection onto the viewport is (0, 0), so the bottom left corner point of the plane is (-vp_w/2, -vp_h/2, depthZ) and the top right corner point is (vp_w/2, vp_h/2, depthZ). Ensure that the near plane of the perspective projection is less than -depthZ and the far plane is greater than -depthZ.
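A small worked check of the formula, as a sketch with an assumed 800x600 viewport and fov_y = 90°; since tan(45°) = 1, the expected result is depthZ = -600 / 2 = -300:

#include <cmath>
#include <cstdio>

int main()
{
    const float PI = 3.14159265f;
    float vp_h  = 600.0f; // viewport height in pixels
    float fov_y = 90.0f;  // vertical field of view in degrees

    float angRad = fov_y * PI / 180.0f;
    float depthZ = -vp_h / (2.0f * std::tan(angRad / 2.0f));

    printf("depthZ = %.1f\n", depthZ); // prints depthZ = -300.0
    return 0;
}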
See further:
Both depth buffer and triangle face orientation are reversed in OpenGL
Transform the modelMatrix

Converting Screen 2D to World 3D Coordinates

I want to convert 2D screen coordinates to 3D world coordinates. I have searched a lot but I did not get any satisfying result.
Note: I am not using OpenGL nor any other graphics library.
Data which I have:
Screen X
Screen Y
Screen Height
Screen Width
Aspect Ratio
If you have the camera's world matrix and projection matrix, this is pretty simple.
If you don't have the world matrix, you can compute it from the camera's position and rotation.
worldMatrix = Translate(x, y, z) * RotateZ(z_angle) * RotateY(y_angle) * RotateX(x_angle);
where Translate returns the 4x4 translation matrix and Rotate returns the 4x4 rotation matrix around the given axis.
The projection matrix can be calculated from the aspect ratio, field of view angle, and near and far planes.
This blog has a good explanation of how to calculate the projection matrix.
You can unproject the screen coordinates by doing:
mat = worldMatrix * inverse(ProjectionMatrix)
dir = transpose(mat) * <x_screen, y_screen, 0.5, 1>
dir /= dir.w   // perspective divide by the w component of the result
dir -= camera.position
Your ray will point from the camera in the direction dir.
This should work, but it's not a super concrete example of how to do this.
Basically you just need to do the following steps:
calculate camera's worldMatrix
calculate camera's projection matrix
multiply worldMatrix with inverse projection matrix.
create a point <Screen_X_Value, Screen_Y_Value, SOME_POSITIVE_Z_VALUE, 1>
apply this "inverse" projection to your point.
then subtract the camera's position from this point.
The resulting vector is the direction from the camera. Any point along that ray is a 3D point corresponding to your 2D screen coordinate. A GLM-based sketch of these steps follows.
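A minimal sketch of these steps using GLM (which the other sections here already use), assuming the camera world matrix is the inverse of the view matrix and that viewMatrix, projectionMatrix and cameraPos are your own data; the screen coordinates are assumed to be already normalized to [-1, 1]:

#include <glm/glm.hpp>

// Returns the world-space direction of the ray through a screen point.
glm::vec3 screenRayDir(float x_ndc, float y_ndc,
                       const glm::mat4 &viewMatrix,
                       const glm::mat4 &projectionMatrix,
                       const glm::vec3 &cameraPos)
{
    // unproject a point on the ray from normalized device space to world space
    glm::mat4 inv_vp  = glm::inverse(projectionMatrix * viewMatrix);
    glm::vec4 p_h     = inv_vp * glm::vec4(x_ndc, y_ndc, 0.5f, 1.0f);
    glm::vec3 p_world = glm::vec3(p_h) / p_h.w;

    // the ray direction points from the camera to the unprojected point
    return glm::normalize(p_world - cameraPos);
}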

Transform cube on to surface of sphere in openGL

I'm currently working on a game which renders a textured sphere (representing Earth) and cubes representing player models (which will be implemented later).
When a user clicks a point on the sphere, the cube is translated from the origin (0,0,0) (which is also the center of the sphere) to the point on the surface of the sphere.
The problem is that I want the cube to rotate so that it sits with its base flat on the sphere's surface (as opposed to just translating the cube).
What is the best way to calculate the rotation matrices about each axis in order to achieve this effect?
This is the same calculation as you'd perform to make a "lookat" matrix.
In this form, you would use the normalised point on the sphere as one axis (often used as the 'Z' axis), and then make the other two as vectors perpendicular to it. Typically, to do that you choose some arbitrary 'up' axis, which must not be parallel to your first axis, and then use two cross products. First you cross 'Z' and 'Up' to make an 'X' axis, and then you cross the 'X' and 'Z' axes to make a 'Y' axis.
The X, Y, and Z axes (normalised) form a rotation matrix which will orient the cube to the surface normal of the sphere. Then just translate it to the surface point.
The basic idea in GL is this (cross and normalise are assumed helpers that operate on float[3] arrays, since C arrays cannot be assigned directly):
float up[3] = { 0.0f, 1.0f, 0.0f }; // arbitrary 'up' vector, must not be parallel to z_axis
float x_axis[3];
float y_axis[3];
float z_axis[3]; // This is the point on the sphere, normalised

cross(x_axis, z_axis, up);      // x = z cross up
normalise(x_axis);
cross(y_axis, z_axis, x_axis);  // y = z cross x (already unit length)

DrawSphere();

glPushMatrix();
float mat[16] = {
    x_axis[0], x_axis[1], x_axis[2], 0,
    y_axis[0], y_axis[1], y_axis[2], 0,
    z_axis[0], z_axis[1], z_axis[2], 0,
    (sphereRad + cubeSize) * z_axis[0],
    (sphereRad + cubeSize) * z_axis[1],
    (sphereRad + cubeSize) * z_axis[2], 1 };
glMultMatrixf(mat);
DrawCube();
glPopMatrix();
Where z_axis[] is the normalised point on the sphere, x_axis[] is the normalised cross product of that vector with the arbitrary 'up' vector, and y_axis[] is the normalised cross product of the other two axes. sphereRad and cubeSize are the sizes of the sphere and cube; I'm assuming both shapes are centred on their local coordinate origin.
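For a modern (shader-based) pipeline, the same construction as a hedged GLM sketch; surfacePoint, sphereRad and cubeSize are placeholders for your own data:

#include <glm/glm.hpp>

// Builds a model matrix that orients a cube so its base sits flat on the
// sphere at surfacePoint, then translates it onto the surface.
glm::mat4 cubeOnSphere(const glm::vec3 &surfacePoint, float sphereRad, float cubeSize)
{
    glm::vec3 z_axis = glm::normalize(surfacePoint); // surface normal
    glm::vec3 up(0.0f, 1.0f, 0.0f);                  // pick another 'up' if parallel to z_axis
    glm::vec3 x_axis = glm::normalize(glm::cross(z_axis, up));
    glm::vec3 y_axis = glm::cross(z_axis, x_axis);   // already unit length

    glm::mat4 m(1.0f);
    m[0] = glm::vec4(x_axis, 0.0f);
    m[1] = glm::vec4(y_axis, 0.0f);
    m[2] = glm::vec4(z_axis, 0.0f);
    m[3] = glm::vec4((sphereRad + cubeSize) * z_axis, 1.0f); // translate to the surface
    return m;
}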
Where z_axis[] is the normalised point on the sphere, x_axis[] is the normalised cross-product of that vector with the arbitrary 'up' vector, and y_axis[] is the normalised cross-product of the other two axes. sphereRad and cubeSize are the sizes of the sphere and cube - I'm assuming both shapes are centred on their local coordinate origin.