I'm implementing a 3D maze game in C++ with OpenGL, where the camera sits at a fixed position exactly above the middle of the maze, looking at the middle as well. It works and looks just as I wanted, but my code is not so nice because I needed to put a + 0.01f into the position to make it work. If I leave it out, the labyrinth doesn't show up; it seems as if the camera points in the opposite direction. How can I fix this elegantly?
glm::vec3 FrontDirection = glm::vec3(((GLfloat)MazeSize / 2) + 0.01f, 0.0f, ((GLfloat)MazeSize / 2));
The (0,0,0) origin is the left corner of the maze (the start position), which is why I set the position like this:
glm::vec3 CameraPosition = glm::vec3(((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
The UpDirection is (0.0f, 1.0f, 0.0f), and the lookAt call looks like this:
Matrix = glm::lookAt(CameraPosition, FrontDirection, UpDirection);
I know that by convention, in OpenGL the camera points towards the negative z-axis, so the FrontDirection is basically pointing in the reverse direction of what it is targeting. Setting the positions as I did above seems completely clear to me, yet it still does not work as I expected (unless I add that + 0.01f).
Thank you in advance for your answers!
"I know that by convention, in OpenGL the camera points towards the negative z-axis"
In view space the z-axis points out of the viewport, but that is irrelevant here. You define the view matrix, i.e. the position and the orientation of the camera. The camera position is the first argument of glm::lookAt, and the camera looks towards the target, the 2nd argument of glm::lookAt.
The 2nd parameter of glm::lookAt is not a direction vector, it is a point on the line of sight.
Compute a point on the line of sight as CameraPosition + FrontDirection:
Matrix = glm::lookAt(CameraPosition, CameraPosition+FrontDirection, UpDirection);
Furthermore, your up-vector has the same direction as the line of sight, but it should be orthogonal to it. The up-vector defines the roll of the camera:
glm::vec3 CameraPosition = glm::vec3(
((GLfloat)MazeSize / 2), ((GLfloat)MazeSize + 5.0f), ((GLfloat)MazeSize / 2));
glm::vec3 CameraTarget = glm::vec3(
((GLfloat)MazeSize / 2), 0.0f, ((GLfloat)MazeSize / 2));
glm::vec3 UpDirection = glm::vec3(0.0f, 0.0f, 1.0f);
Matrix = glm::lookAt(CameraPosition, CameraTarget, UpDirection);
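Since this camera looks straight down the world y-axis, any horizontal vector is a valid up-vector; choosing (0.0f, 0.0f, 1.0f) simply decides which side of the maze appears at the top of the screen.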
I'm trying to change my camera projection from perspective to orthographic.
At the moment my code is working fine with the perspective projection:
m_prespective = glm::perspective(70.0f, (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_prespective * glm::lookAt(m_position, m_forward, m_up);
But as soon as I change it to an orthographic projection, I can't see my mesh anymore.
m_ortho = glm::ortho(0.0f, (float)DISPLAY_WIDTH, (float)DISPLAY_HEIGHT,5.0f, 0.01f, 1000.0f);
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -mesh.radius);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_ortho * glm::lookAt(m_position, m_forward, m_up);
I don't understand what I'm doing wrong.
In perspective projection, the term (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT evaluates to the picture aspect ratio, a number on the order of 1. The left and right clip plane distances at the near plane are then on the order of aspect * near_distance. More interesting, though, is the left-right expanse at the viewing distance, which in your case is abs(m_position.z) = abs(mesh.radius).
Carrying this over to orthographic projection, the left, right, top and bottom clip plane distances should be of the same order of magnitude; given that aspect is close to 1, the values for left, right, bottom and top should be close to abs(mesh.radius). The resolution of the display in pixels is irrelevant except for the aspect ratio.
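To put numbers on it: with a hypothetical mesh.radius of 10 (so view_distance = 11 in the code below) and a 16:9 display (aspect of about 1.78), sensible orthographic extents come out to roughly left/right = -19.6/+19.6 and bottom/top = -11/+11, a volume sized to the mesh rather than the 0..1280 by 720..5 volume requested by the original call on a 1280x720 display.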
Furthermore, when using a perspective projection, the value for near should be chosen as large as possible while still keeping all desired geometry visible; choosing it needlessly small wastes precious depth buffer resolution.
float const view_distance = mesh.radius + 1;
float const aspect = (float)DISPLAY_WIDTH / (float)DISPLAY_HEIGHT;
switch( typeof_projection ){
case perspective:
    // Note: newer GLM versions expect the field of view in radians,
    // i.e. glm::radians(70.0f) instead of 70.0f.
    m_projection = glm::perspective(70.0f, aspect, 1.f, 1000.0f);
    break;
case ortho:
    m_projection = glm::ortho(
        -aspect * view_distance,  // left
         aspect * view_distance,  // right
        -view_distance,           // bottom
         view_distance,           // top
        -1000.f, 1000.f );        // near, far (floats, matching the other arguments)
    break;
}
m_position = glm::vec3(mesh.centre.x, mesh.centre.y, -view_distance);
m_forward = centre;
m_up = glm::vec3(0.0f, 1.0f, 0.0f);
return m_projection * glm::lookAt(m_position, m_forward, m_up);
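Note also that, as in the first question above, the 2nd argument of glm::lookAt is a point on the line of sight rather than a direction; m_forward = centre works here only because centre is the point the camera should look at.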
I am trying to implement a FPS camera using C++, OpenGL and GLM.
What I did until now:
I have a cameraPosition vector for the camera position, and also
cameraForward (pointing to where the camera looks), cameraRight and cameraUp, which are calculated like this:
inline void controlCamera(GLFWwindow* currentWindow, const float& mouseSpeed, const float& deltaTime)
{
double mousePositionX, mousePositionY;
glfwGetCursorPos(currentWindow, &mousePositionX, &mousePositionY);
int windowWidth, windowHeight;
glfwGetWindowSize(currentWindow, &windowWidth, &windowHeight);
m_cameraYaw += (windowWidth / 2 - mousePositionX) * mouseSpeed;
m_cameraPitch += (windowHeight / 2 - mousePositionY) * mouseSpeed;
lockCamera();
glfwSetCursorPos(currentWindow, windowWidth / 2, windowHeight / 2);
// Rotate the forward vector horizontally. (the first argument is the default forward vector)
m_cameraForward = rotate(vec3(0.0f, 0.0f, -1.0f), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Rotate the forward vector vertically.
m_cameraForward = rotate(m_cameraForward, -m_cameraPitch, vec3(1.0f, 0.0f, 0.0f));
// Calculate the right vector. First argument is the default right vector.
m_cameraRight = rotate(vec3(1.0, 0.0, 0.0), m_cameraYaw, vec3(0.0f, 1.0f, 0.0f));
// Calculate the up vector.
m_cameraUp = cross(m_cameraRight, m_cameraForward);
}
Then I "look at" like this:
lookAt(m_cameraPosition, m_cameraPosition + m_cameraForward, m_cameraUp)
The problem: I seem to be missing something, because my FPS camera works as it is supposed to until I move forward and get past z = 0.0 (z becomes negative); then my vertical mouse look flips, and when I try to look up, my application looks down.
The same question was asked here: glm::lookAt vertical camera flips when z <= 0, but I didn't understand what the issue is or how to solve it.
EDIT: The problem is definitely in the forward, up and right vectors. When I calculate them like this:
m_cameraForward = vec3(
cos(m_cameraPitch) * sin(m_cameraYaw),
sin(m_cameraPitch),
cos(m_cameraPitch) * cos(m_cameraYaw)
);
m_cameraRight = vec3(
sin(m_cameraYaw - 3.14f/2.0f),
0,
cos(m_cameraYaw - 3.14f/2.0f)
);
m_cameraUp = glm::cross(m_cameraRight, m_cameraForward);
Then the problem goes away, but now m_cameraPitch and m_cameraYaw don't match: if m_cameraYaw is 250 and I make a 180 degree turn, m_cameraYaw is 265... So I can't, for example, restrict leaning backwards like that. Any ideas?
I understand the basic principles of bounding sphere collision, but implementing it is confusing me a little.
Say I have two cubes defined within the arrays cube1[] and cube2[], with each array consisting of GLfloats that make up each triangle. How can I first calculate the center point of each cube, and how would I get the radius of the sphere around it?
What mathematics are needed to calculate this?
EDIT: To give more clarification on my question, assume I have a cube defined using the following array:
GLfloat cube[] = {
2.0f, 3.0f, -4.0f, // triangle 1, top right
3.0f, 3.0f, -4.0f,
2.0f, 2.0f, -4.0f, // bottom right
3.0f, 3.0f, -4.0f, // triangle 2, back face top left
3.0f, 2.0f, -4.0f, // bottom left
2.0f, 2.0f, -4.0f,
2.0f, 3.0f, -3.0f, // triangle 1, front face top left
2.0f, 2.0f, -3.0f, // bottom left
3.0f, 3.0f, -3.0f, // Bottom right
3.0f, 3.0f, -3.0f, // triangle 2, front face
2.0f, 2.0f, -3.0f,
3.0f, 2.0f, -3.0f, // Bottom right
2.0f, 3.0f, -3.0f, // triangle 1, top face
3.0f, 3.0f, -3.0f,
2.0f, 3.0f, -4.0f,
3.0f, 3.0f, -4.0f, // triangle 2, top face
2.0f, 3.0f, -4.0f,
3.0f, 3.0f, -3.0f,
2.0f, 2.0f, -3.0f, // triangle 1, bottom face
2.0f, 2.0f, -4.0f,
3.0f, 2.0f, -3.0f,
3.0f, 2.0f, -4.0f, // triangle 2, bottom face
3.0f, 2.0f, -3.0f, // Bottom Right.
2.0f, 2.0f, -4.0f,
2.0f, 2.0f, -4.0f, // triangle 1, left face
2.0f, 2.0f, -3.0f,
2.0f, 3.0f, -4.0f,
2.0f, 3.0f, -4.0f, // triangle 2, left face
2.0f, 2.0f, -3.0f,
2.0f, 3.0f, -3.0f,
3.0f, 2.0f, -4.0f, // triangle 1, right face
3.0f, 3.0f, -4.0f,
3.0f, 2.0f, -3.0f,
3.0f, 3.0f, -4.0f, // triangle 2, right face
3.0f, 3.0f, -3.0f,
3.0f, 2.0f, -3.0f,
};
Given this cube, I need to get the center point and keep track of it every time the cube translates. I believe I have done so, but assistance on whether the following is correct would also be appreciated:
// Calculate initial center of the shape
glm::vec3 corner1 = glm::vec3(2.0f, 3.0f, -4.0f);
glm::vec3 corner2 = glm::vec3(2.0f, 2.0f, -4.0f);
glm::vec3 corner3 = glm::vec3(3.0f, 3.0f, -4.0f);
glm::vec3 corner4 = glm::vec3(3.0f, 2.0f, -4.0f);
glm::vec3 corner5 = glm::vec3(2.0f, 3.0f, -3.0f);
glm::vec3 corner6 = glm::vec3(2.0f, 2.0f, -3.0f);
glm::vec3 corner7 = glm::vec3(3.0f, 3.0f, -3.0f);
glm::vec3 corner8 = glm::vec3(3.0f, 2.0f, -3.0f);
GLfloat x = (corner1.x + corner2.x + corner3.x + corner4.x + corner5.x + corner6.x+ corner7.x + corner8.x)/8;
GLfloat y = (corner1.y + corner2.y + corner3.y + corner4.y + corner5.y + corner6.y+ corner7.y + corner8.y)/8;
GLfloat z = (corner1.z + corner2.z + corner3.z + corner4.z + corner5.z + corner6.z+ corner7.z + corner8.z)/8;
center = glm::vec4(x, y, z, 1.0f);
Translation is kept in check with the following function:
void Cube::Translate(double x, double y, double z)
{
// Translation matrix for cube.
glm::mat4 cubeTransMatrix = glm::mat4();
cubeTransMatrix = glm::translate(cubeTransMatrix, glm::vec3(x, y, z));
//center = cubeTransMatrix * center;
//Move the cube
for(int i = 0; i < sizeof(cube) / sizeof(GLfloat); i+=3){
glm::vec4 vector = glm::vec4(cube[i], cube[i+1], cube[i+2], 1.0f);
glm::vec4 translate = cubeTransMatrix*vector;
glm::vec4 translateCenter = cubeTransMatrix*center;
center.x = translateCenter[0];
center.y = translateCenter[1];
center.z = translateCenter[2];
cube[i] = translate[0];
cube[i+1] = translate[1];
cube[i+2] = translate[2];
}
}
The center point of a shape can be calculated in many ways, depending on what you want to consider the "center." For a cube, however, it is generally taken to be the mean of its corner points, which is simple to compute: add up the corner vectors and divide by 8. Depending on the exact mesh of your cube you may have more vertices than that, but for a simple cube this shouldn't be the case.
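As a minimal sketch of both calculations, assuming a raw float triangle array laid out like the cube[] above (boundingSphere is a hypothetical helper, not part of the question's code): because corner vertices repeat an unequal number of times in such an array, a plain average over all 36 vertices can be skewed, so the midpoint of the axis-aligned min/max bounds is the safer way to get the same center; the half-diagonal of those bounds then serves as a circumscribed-sphere radius.
#include <cfloat>
#include <glm/glm.hpp>
// Hypothetical helper: derive a bounding-sphere center and radius from a raw
// triangle array of 'count' floats (a multiple of 3, laid out x, y, z).
void boundingSphere(const float* verts, int count,
                    glm::vec3& outCenter, float& outRadius)
{
    glm::vec3 lo(FLT_MAX), hi(-FLT_MAX);
    for (int i = 0; i < count; i += 3) {
        const glm::vec3 v(verts[i], verts[i + 1], verts[i + 2]);
        lo = glm::min(lo, v); // component-wise minimum
        hi = glm::max(hi, v); // component-wise maximum
    }
    outCenter = (lo + hi) * 0.5f;            // midpoint of the bounds
    outRadius = glm::length(hi - lo) * 0.5f; // half-diagonal: circumscribed sphere
}
For the cube[] above this yields a center of (2.5, 2.5, -3.5) and a radius of sqrt(3)/2, about 0.866.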
If you don't have access to the vertices themselves (you loaded a mesh, or are using the default cube built into GLUT, or something), you will need to keep track of the transformations applied to that cube. I might suggest keeping a "local" position vector or a local transformation matrix for each cube.
With OpenGL, matrices are column-major, so the first three values of the right-most column of a transformation matrix hold your location in world coordinates after any global transformations have taken place.
Detecting a collision is almost easier (once you get past the whole "predicting when the collision is going to take place" part, which I wouldn't worry about for a first implementation). Spheres are simple shapes, and detecting whether two spheres intersect is even simpler: find the squared distance between the two sphere centers, and compare it to the square of the sum of their radii.
If the squared radius sum, (r0 + r1)^2, is greater than the squared distance, the spheres intersect; otherwise they do not. Note that comparing against r0^2 + r1^2 instead is a common mistake that misses shallow overlaps.
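For example, two spheres with radii 1 and 2 whose centers are 2.5 units apart do overlap: (1 + 2)^2 = 9 is greater than 2.5^2 = 6.25. The incorrect sum-of-squares test, 1^2 + 2^2 = 5, is less than 6.25 and would report a miss.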
Just to illustrate how simple this calculation really is, I'll show you here:
float radiusSum = sphere0.radius + sphere1.radius;
float radiusSumSqr = radiusSum * radiusSum; // square of the sum, not the sum of the squares
float distX = sphere0.position.x - sphere1.position.x;
float distY = sphere0.position.y - sphere1.position.y;
float distZ = sphere0.position.z - sphere1.position.z;
// Since we square these anyway, we don't need absolute values
// to compare them accurately to the radii
distX *= distX;
distY *= distY;
distZ *= distZ;
float sqrDist = (distX+distY+distZ);
if(radiusSumSqr > sqrDist)
{
// They intersect
}
else
{
// They do not intersect
}
Once you've detected the collision (assuming you want the spheres to behave as rigid-body colliders), moving them apart is simple: take the intersection depth of the two spheres and push each sphere out by half of it. For the sake of efficiency, we should first modify our previous code a bit:
// Square into separate variables this time, so that distX, distY and distZ
// keep their signed values for the resolution step below
float distSqrX = distX * distX;
float distSqrY = distY * distY;
float distSqrZ = distZ * distZ;
float sqrDist = (distSqrX+distSqrY+distSqrZ);
Once we've done that, we can calculate the rest of the resolution for this collision. We'll do it in a very simple way, assuming neither object has mass and there is no impulse to calculate.
float totalRadius = sphere0.radius + sphere1.radius;// the sum of the two spheres' radii
float dist = sqrt(sqrDist);// the actual distance between the two shapes' centers
float minMovement = (totalRadius - dist);// the difference between the total radius and the actual distance tells us how far they intersect.
minMovement /= dist;// Divide by the distance to "scale" this movement so we can "scale" our distance vector (distX, distY, and distZ)
float mvmtX = distX * minMovement * 0.5f;// The minimum movement on the x-axis to resolve the collision
float mvmtY = distY * minMovement * 0.5f;// The minimum movement on the y-axis to resolve the collision
float mvmtZ = distZ * minMovement * 0.5f;// The minimum movement on the z-axis to resolve the collision
// For the sake of simplicity, we'll just have them "pop" out of each other, and won't
// be doing any interpolation to "smooth" the spheres' interaction.
//
// Note that (distX, distY, distZ) points from sphere1 toward sphere0, so the signs
// of mvmtX, mvmtY and mvmtZ already encode the correct directions: adding the
// movement pushes sphere0 away from sphere1, and subtracting it pushes sphere1
// away from sphere0. No per-axis branching is needed.
sphere0.position.x += mvmtX;
sphere0.position.y += mvmtY;
sphere0.position.z += mvmtZ;
sphere1.position.x -= mvmtX;
sphere1.position.y -= mvmtY;
sphere1.position.z -= mvmtZ;
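One edge case worth guarding against in real code: if the two centers coincide, dist is zero and the minMovement /= dist step divides by zero. A common fallback is to pick an arbitrary separation axis whenever sqrDist falls below some small epsilon.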
Lastly, the radius of the sphere used for collision detection around a cube can be chosen in one of three ways; a small numeric sketch of all three follows below.
Using a circumscribed sphere (the sphere "touches" the corners of the cube), the radius is sqrt(3) * edgeLength * 0.5. You will get an "overreactive" collision detection system, in that it will detect collisions that are reasonably far outside the volume of the cube, because the radius has to reach the corners of the box. The largest error is at the center of each face of the cube, where the sphere overextends the cube by (sqrt(3) - 1) / 2, roughly 0.37 times the edge length.
The second method is to use an inscribed sphere, where the sphere is tangent to the faces of the cube (the sphere "touches" the center of each face) and has a radius of edgeLength * 0.5. Again there will be error, but this option tends to have a bit MORE of it, since it is "underreactive" at 8 points (the corners) rather than "overreactive" at 6 (the faces). The distance by which each corner pokes out of the sphere is the same as the overreach of the previous option, roughly 0.37 times the edge length.
The last, and most balanced, method is to calculate the sphere so that it is tangent to the edges of the cube, with a radius of edgeLength / sqrt(2). This sphere "touches" the center of each edge of the cube, overestimates on every face and underestimates on every corner, but the error is considerably smaller and generally more tolerable: the overreach at a face center is (sqrt(2) - 1) / 2, about 0.21 edge lengths, and the underreach at a corner is (sqrt(3) - sqrt(2)) / 2, about 0.16 edge lengths, roughly half the worst-case error of the other two options.
The choice is yours as to which best suits your needs. The first has the best "corner" detection but the worst "face" detection, the second has the best "face" detection and the worst "corner" detection, and the third is more or less the average of the two, giving it the most reliable accuracy when all cases are possible.
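To make the three options concrete, here is a minimal standalone sketch (the edgeLength value is hypothetical) computing each radius with the formulas above:
#include <cmath>
#include <cstdio>
int main()
{
    const float edgeLength = 1.0f; // hypothetical unit cube
    // Circumscribed sphere: touches the 8 corners, overreactive at the faces.
    const float rCircumscribed = std::sqrt(3.0f) * edgeLength * 0.5f; // ~0.866
    // Inscribed sphere: touches the 6 face centers, underreactive at the corners.
    const float rInscribed = edgeLength * 0.5f;                       // 0.5
    // Midsphere: touches the 12 edge midpoints, smallest worst-case error.
    const float rMidsphere = edgeLength / std::sqrt(2.0f);            // ~0.707
    std::printf("circumscribed %.3f  inscribed %.3f  midsphere %.3f\n",
                rCircumscribed, rInscribed, rMidsphere);
}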