I'm creating an attack boat game in C++ and I have an issue with my boat following the mouse around the screen. My plan is to have the boat follow the mouse more like a boat would (slow rotations instead of instantaneous ones, taking about 4 seconds to do a full 360-degree turn), and for the most part it does what it should.
The bug happens when the mouse is on the left side of the screen: as soon as the mouse crosses the negative x-axis, the boat turns in the wrong direction and does a full 360 instead of taking the short way round to follow the mouse.
This is the code I'm using to turn the boat:
angle = atan2(delta_y, delta_x) * 180.0 / PI;
//Rotate the boat towards the mouse and
//make the boat turn more realistically
if (angle - rotate > 0) {
rotate += 1.0f; // turns left
} else if (angle - rotate < 0) {
rotate -= 1.0f; // turns right
}
if (angle - rotate >= 360.0f) {
rotate = 0.0f;
}
You forgot to clamp the angle difference. It should be in the interval <-180, +180> degrees (i.e. <-pi, +pi> in radians); any difference outside this interval will make the boat turn the long way around, which is exactly the behaviour you describe. Try this instead:
angle = atan2(delta_y, delta_x) * 180.0 / PI; // target [deg]
float da = angle - rotate; // unclamped delta [deg]
while (da<-180.0f) da+=360.0f;
while (da>+180.0f) da-=360.0f;
if (da >= +1.0f) rotate += 1.0f;
else if (da <= -1.0f) rotate -= 1.0f;
else rotate = angle; // within one step of the target: snap to it
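As a fuller illustration, here is a self-contained sketch of that update step (turnRate, deltaTime and the boat position parameters are illustrative names, not from the original code); at 90 deg/s a full turn takes the 4 seconds mentioned in the question:
#include <cmath>

// Turn the current heading 'rotate' (degrees) toward the mouse by at most
// turnRate * deltaTime degrees per frame; 90 deg/s gives a 4-second 360.
void updateHeading(float& rotate, float mouseX, float mouseY,
                   float boatX, float boatY, float deltaTime,
                   float turnRate = 90.0f)
{
    const float PI = 3.14159265f;
    float target = atan2f(mouseY - boatY, mouseX - boatX) * 180.0f / PI;

    float da = target - rotate;         // raw difference [deg]
    while (da < -180.0f) da += 360.0f;  // wrap into (-180, +180]
    while (da > +180.0f) da -= 360.0f;

    float step = turnRate * deltaTime;  // maximum turn this frame [deg]
    if (da > +step)       rotate += step;
    else if (da < -step)  rotate -= step;
    else                  rotate = target;  // within one step: snap to target
}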
Firstly, this problem is not unique to the project I am currently working on; it has happened several times before. Here is the problem.
I have a Triangle struct. olc::vf2d is a vector class with x and y components.
struct Triangle
{
olc::vf2d p1 = { 0.0f, -10.0f };
olc::vf2d p2 = { -5.0f, 5.0f };
olc::vf2d p3 = { 5.0f, 5.0f };
};
I create a triangle along with position and angle for it.
Triangle triangle;
olc::vf2d position = { 0.0f, 0.0f };
float angle = 0.0f;
Now, I rotate (and offset) the triangle like so:
float x1 = triangle.p1.x * cosf(angle) - triangle.p1.y * sinf(-angle) + position.x;
float y1 = triangle.p1.x * sinf(-angle) + triangle.p1.y * cosf(angle) + position.y;
float x2 = triangle.p2.x * cosf(angle) - triangle.p2.y * sinf(-angle) + position.x;
float y2 = triangle.p2.x * sinf(-angle) + triangle.p2.y * cosf(angle) + position.y;
float x3 = triangle.p3.x * cosf(angle) - triangle.p3.y * sinf(-angle) + position.x;
float y3 = triangle.p3.x * sinf(-angle) + triangle.p3.y * cosf(angle) + position.y;
When I increase the angle every frame and draw it, it rotates and works as expected. But now here is the problem. When I try to calculate the direction for it to move towards, like this:
position.x -= cosf(angle) * elapsedTime;
position.y -= sinf(-angle) * elapsedTime;
It moves, but looks 90 degrees off from the rotation. For example, it is facing directly up but moving to the right.
Up until this point, I have always solved this problem by using different angle values, i.e. subtracting 3.14159f / 2.0f radians from the angle used in the direction calculation:
position.x -= cosf(angle - (3.14159f / 2.0f));
position.y -= sinf(-angle - (3.14159f / 2.0f));
or vice versa, and this fixes the problem (now it moves in the direction it is facing).
But now I want to know exactly why this happens and what the proper way to solve it is. Many thanks.
There are some missing details needed to diagnose this. You have to know what kind of coordinate system you are using: is it right-handed or left-handed? You can determine this by taking your X/Y origin and visualizing your hand over it with your thumb pointing towards you. When the positive X-axis rotates counter-clockwise (the way the fingers of your right hand curl when held like that), does it move towards the positive Y-axis (a right-handed system) or towards the negative Y-axis (a left-handed system)?
As an example, most algebra graphing is done with a right-handed system, but on the raw pixels of a monitor, positive Y points down instead of up as it typically does in algebra.
Direction of motion should be independent of rotation angle -- unless you really want them coupled, which is not typically the case.
Using two different variables for direction-of-motion and angle-of-rotation will allow you to visually and mentally decouple the two and see better what is happening.
Now, typically (think algebra) angles of rotation are measured starting from "east": pointing to the right is zero degrees, "north" (pointing up) is 90 degrees, and so on.
If you want to move "straight up" you are not moving in the zero-degree direction, but rather in the 90-degree direction. So if your rotation is zero degrees but movement is desired to be "straight up" like the triangle points, then you should be using a 90-degree offset for your movement versus your rotation.
If you decouple rotation and motion, this behavior is much easier to understand and observe.
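As a sketch of that idea applied to the question's setup (olc::vf2d and its arithmetic operators are assumed from the question; speed is a made-up parameter): instead of reusing the raw angle in cos/sin, derive the movement direction by pushing the model's local forward vector (0, -1) through the same rotation used for drawing, so rendering and movement can never disagree:
// Local "forward" of the triangle model: p1 points up, towards (0, -1).
olc::vf2d localForward = { 0.0f, -1.0f };

// Rotate it with exactly the same formula used to rotate p1/p2/p3 above.
olc::vf2d forward = {
    localForward.x * cosf(angle) - localForward.y * sinf(-angle),
    localForward.x * sinf(-angle) + localForward.y * cosf(angle)
};

// Move along the facing direction; no ad-hoc 90-degree correction needed.
position += forward * speed * elapsedTime;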
In OpenGL, I'm building a football game that allows you to shoot a ball by first moving two aiming indicators (one horizontal, one vertical), then shooting based on them when a button is pressed. Here's what it looks like:
Football Game Visual
When these indicators are moved, my ball needs to travel at the height set by the vertical indicator (y), and in the left/right direction set by the horizontal one (x).
Firstly, here's the code that moves my indicators (which are just textures being drawn in my RenderScene() function)
void SpecialKeys(int key, int x, int y){
if (key == GLUT_KEY_RIGHT) { // moves the bottom indicator RIGHT
horizontalBarX += 5.0f;
}
if (key == GLUT_KEY_LEFT) {
horizontalBarX -= 5.0f; // moves the bottom indicator LEFT
}
if (key == GLUT_KEY_UP) { // moves the top indicator UP
verticalBarY += 5.0f;
verticalBarX += 1.0f;
}
if (key == GLUT_KEY_DOWN) { // moves the top indicator DOWN
verticalBarY -= 5.0f;
verticalBarX -= 1.0f;
}
}
Calculations for my football to move
Now, to get my football to move after the indicators have been moved, I need to apply the following calculations to the x, y and z axes of the ball:
x = sin(theta) * cos(phi)
y = cos(theta) * sin(phi)
z = cos(theta)
where theta = angle in z-x, and phi = angle in z-y
So with this, I have attempted to get the values of both the theta and phi angles first, by simply incrementing them depending on which indicator keys are pressed in the SpecialKeys() function:
void SpecialKeys(int key, int x, int y){
if (key == GLUT_KEY_RIGHT) { // moves the bottom indicator RIGHT
horizontalBarX += 5.0f;
theta += 5; // Increase theta angle by 5
}
if (key == GLUT_KEY_LEFT) {
horizontalBarX -= 5.0f; // moves the bottom indicator LEFT
theta -= 5; // Decrease theta angle by 5
}
if (key == GLUT_KEY_UP) { // moves the top indicator UP
verticalBarY += 5.0f;
verticalBarX += 1.0f;
phi += 5; // Increase phi angle by 5
}
if (key == GLUT_KEY_DOWN) { // moves the top indicator DOWN
verticalBarY -= 5.0f;
verticalBarX -= 1.0f;
phi -= 5; // Decrease phi angle by 5
}
}
Now that I have the angles, I want to plug the calculated values into the drawFootBall() parameters. The function is initially called in my RenderScene() function as...
drawFootBall(0, 40, 500, 50); // x, y, z, r
...and here's how I'm attempting to launch the ball with the calculations above:
void SpecialKeys(int key, int x, int y){
// indicator if statements just above this line
if (key == GLUT_KEY_F1) {
drawFootBall(sin(theta)*cos(phi), cos(theta)*sin(phi), cos(theta), 50);
}
}
But when I press the launch key F1, nothing happens at all. Where have I messed up?
EDIT:
If it helps, here's my drawFootBall() function:
void drawFootBall(GLfloat x, GLfloat y, GLfloat z, GLfloat r)
{
glPushMatrix();
glFrontFace(GL_CCW);
glTranslatef(x,y,z);
//create ball texture
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textures[TEXTURE_BALL]);
//glDisable(GL_LIGHTING);
glColor3f(0.5,0.5,0.5);
quadricFootball = gluNewQuadric();
gluQuadricDrawStyle(quadricFootball, GLU_FILL);
gluQuadricNormals(quadricFootball, GLU_SMOOTH);
gluQuadricOrientation(quadricFootball, GLU_OUTSIDE);
gluQuadricTexture(quadricFootball, GL_TRUE);
gluSphere(quadricFootball, r, 85, 50);
glDisable(GL_TEXTURE_2D);
glPopMatrix();
}
Firstly, make sure your theta and phi are in the right units, i.e. radians. If they are in degrees, convert them to radians by using sin(theta * PI / 180.0f) and so on, assuming PI is defined.
I believe what you are computing there is a direction vector for the ball: (x, y, z) is the direction in which the ball should travel, assuming there is no gravity or other forces. It's probably the direction in which the ball is kicked.
If you simply want to move your ball, you need to multiply this direction by a length. Since your ball has a radius of 50 units, try translating by 2 times this radius:
glTranslatef(2.0f*r*x,2.0f*r*y,2.0f*r*z);
This will move the ball by 2 times its radius in your desired direction.
However, you will probably want to add some physics for more realistic movement.
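Putting both points together, a rough sketch of the launch case might look like this (theta, phi and the ball's resting position (0, 40, 500) are taken from the question; PI and the 2 * r offset are illustrative assumptions):
// Inside SpecialKeys(), replacing the GLUT_KEY_F1 branch:
if (key == GLUT_KEY_F1) {
    const float PI = 3.14159265f;
    float t = theta * PI / 180.0f;   // convert accumulated angles to radians
    float p = phi * PI / 180.0f;

    // Direction vector from the question's formula.
    float dirX = sin(t) * cos(p);
    float dirY = cos(t) * sin(p);
    float dirZ = cos(t);

    // Offset the ball from its resting position by a visible distance
    // (two radii here) along that direction, rather than drawing at the
    // raw direction values themselves.
    float r = 50.0f;
    drawFootBall(0.0f + 2.0f * r * dirX,
                 40.0f + 2.0f * r * dirY,
                 500.0f + 2.0f * r * dirZ,
                 r);
    glutPostRedisplay();  // ask GLUT to redraw
}
Since anything drawn from the key callback is overwritten the next time RenderScene() runs, in practice you would probably store the computed position in variables and let RenderScene() draw the ball there every frame.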
Hello, I am having a strange issue with my mouse movement in OpenGL. Here is my code for moving the camera with the mouse:
void camera(int x, int y)
{
GLfloat xoff = x- lastX;
GLfloat yoff = lastY - y; // Reversed since y-coordinates range from bottom to top
lastX = x;
lastY = y;
GLfloat sensitivity = 0.5f;
xoff *= sensitivity;
yoff *= sensitivity;
yaw += xoff; // yaw is x
pitch += yoff; // pitch is y
// Limit up and down camera movement to 90 degrees
if (pitch > 89.0)
pitch = 89.0;
if (pitch < -89.0)
pitch = -89.0;
// Update camera position and viewing angle
Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
Front.y = sin(convertToRads(pitch));
Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));
}
convertToRads() is a small function I created to convert the mouse coordinates to radians.
With this code I can move my camera however I want, but if I try to look all the way up, when I reach around 45 degrees the view rotates once or twice around the x-axis and then continues to increase on the y-axis. I can't figure out what I have done wrong, so if anyone could help I would appreciate it.
It seems you have misplaced a parenthesis:
Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
instead of:
Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));
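For context, here is the corrected block in full, plus one way it might feed a view matrix (the gluLookAt call and cameraPos are illustrative assumptions, not from the original code):
// All three components now close each trig call before multiplying.
Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));
Front.y = sin(convertToRads(pitch));
Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));

// One possible use: look from cameraPos towards cameraPos + Front.
gluLookAt(cameraPos.x, cameraPos.y, cameraPos.z,
          cameraPos.x + Front.x, cameraPos.y + Front.y, cameraPos.z + Front.z,
          0.0, 1.0, 0.0);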
I have a 3D program in DirectX and I want to give the mouse control of the camera. The problem is that the mouse moves right off the screen (in windowed mode) and then the camera doesn't turn any more. I tried to use SetCursorPos to lock the cursor in place after the mouse is moved; that way I could get a delta and then set the mouse back to the center of the screen. I ended up getting an endless white screen. Here is my camera/mouse movement code so far. If you need any more information, just ask.
void PhysicsApp::OnMouseMove(WPARAM btnState, int x, int y)
{
// Make each pixel correspond to a quarter of a degree.
float dx = XMConvertToRadians(0.25f*static_cast<float>(x - mLastMousePos.x));
float dy = XMConvertToRadians(0.25f*static_cast<float>(y - mLastMousePos.y));
// Update angles based on input to orbit camera around box.
mTheta += -dx;
mPhi += -dy;
// Update players direction to always face forward
playerRotation.y = -mTheta;
// Restrict the angle mPhi.
mPhi = MathHelper::Clamp(mPhi, 0.1f, MathHelper::Pi-0.1f);
if( (btnState & MK_RBUTTON) != 0 )
{
// Make each pixel correspond to 0.2 unit in the scene.
float dx = 0.05f*static_cast<float>(x - mLastMousePos.x);
float dy = 0.05f*static_cast<float>(y - mLastMousePos.y);
// Update the camera radius based on input.
mRadius += dx - dy;
// Restrict the radius.
mRadius = MathHelper::Clamp(mRadius, 5.0f, 50.0f);
}
mLastMousePos.x = x;
mLastMousePos.y = y;
}
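For what it's worth, here is a minimal sketch of the recentre-and-delta idea described in the question (hWnd, mClientWidth and mClientHeight are assumed names, and the right-button zoom branch is omitted for brevity; SetCursorPos takes screen coordinates, so the client-area centre has to be converted first, and the warp itself generates a mouse-move event that should be ignored):
void PhysicsApp::OnMouseMove(WPARAM btnState, int x, int y)
{
    int centreX = mClientWidth / 2;    // assumed client-size members
    int centreY = mClientHeight / 2;

    if (x == centreX && y == centreY)
        return;                        // this is the move we generate below

    // Offset from the centre, a quarter of a degree per pixel.
    float dx = XMConvertToRadians(0.25f * static_cast<float>(x - centreX));
    float dy = XMConvertToRadians(0.25f * static_cast<float>(y - centreY));

    mTheta += -dx;
    mPhi   += -dy;
    mPhi = MathHelper::Clamp(mPhi, 0.1f, MathHelper::Pi - 0.1f);
    playerRotation.y = -mTheta;

    // Warp the cursor back to the middle so it can never leave the window.
    POINT centre = { centreX, centreY };
    ClientToScreen(hWnd, &centre);     // SetCursorPos expects screen coordinates
    SetCursorPos(centre.x, centre.y);
}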
I'm creating the view matrix for my camera using its current orientation (quaternion) and its current position.
void Camera::updateViewMatrix()
{
view = glm::gtx::quaternion::toMat4(orientation);
// Include rotation (Free Look Camera)
view[3][0] = -glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position);
view[3][1] = -glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position);
view[3][2] = -glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position);
// Ignore rotation (FPS Camera)
//view[3][0] = -position.x;
//view[3][1] = -position.y;
//view[3][2] = -position.z;
view[3][3] = 1.0f;
}
There is a problem with this in that I do not believe the quaternion to matrix calculation is giving the correct answer. Translating the camera works as expected but rotating it causes incorrect behavior.
I am rotating the camera using the difference between the current mouse position and the centre of the screen (resetting the mouse position each frame):
int xPos;
int yPos;
glfwGetMousePos(&xPos, &yPos);
int centreX = 800 / 2;
int centreY = 600 / 2;
rotate(xPos - centreX, yPos - centreY);
// Reset mouse position for next frame
glfwSetMousePos(800 / 2, 600 / 2);
The rotation takes place in this method
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
// Apply rotation speed to the rotation
yawDegrees *= lookSensitivity;
pitchDegrees *= lookSensitivity;
if (isLookInverted)
{
pitchDegrees = -pitchDegrees;
}
pitchAccum += pitchDegrees;
// Stop the camera from looking any higher than 90 degrees
if (pitchAccum > 90.0f)
{
//pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
pitchAccum = 90.0f;
}
// Stop the camera from looking any lower than 90 degrees
if (pitchAccum < -90.0f)
{
//pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
pitchAccum = -90.0f;
}
yawAccum += yawDegrees;
if (yawAccum > 360.0f)
{
yawAccum -= 360.0f;
}
if (yawAccum < -360.0f)
{
yawAccum += 360.0f;
}
float yaw = yawDegrees * DEG2RAD;
float pitch = pitchDegrees * DEG2RAD;
glm::quat rotation;
// Rotate the camera about the world Y axis (if mouse has moved in any x direction)
rotation = glm::gtx::quaternion::angleAxis(yaw, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions
orientation = orientation * rotation;
// Rotate the camera about the world X axis (if mouse has moved in any y direction)
rotation = glm::gtx::quaternion::angleAxis(pitch, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions
orientation = orientation * rotation;
}
Am I concatenating the quaternions correctly for the correct orientation?
There is also a problem with the pitch accumulation in that it restricts my view to ~±5 degrees rather than ±90. What could be the cause of that?
EDIT:
I have solved the problem for the pitch accumulation so that its range is [-90, 90]. It turns out that this version of glm uses degrees rather than radians for the axis-angle rotation, and the order of multiplication for the quaternion concatenation was incorrect.
// Rotate the camera about the world Y axis
// N.B. 'angleAxis' method takes angle in degrees (not in radians)
rotation = glm::gtx::quaternion::angleAxis(yawDegrees, 0.0f, 1.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref rotation, ref orientation)
orientation = orientation * rotation;
// Rotate the camera about the world X axis
rotation = glm::gtx::quaternion::angleAxis(pitchDegrees, 1.0f, 0.0f, 0.0f);
// Concatenate quaternions ('*' operator concatenates)
// C#: Quaternion.Concatenate(ref orientation, ref rotation)
orientation = rotation * orientation;
The problem that remains is that the view matrix rotation appears to rotate the drawn object and not look around like a normal FPS camera.
I have uploaded a video to YouTube to demonstrate the problem. I move the mouse around to change the camera's orientation but the triangle appears to rotate instead.
YouTube video demonstrating camera orientation problem
EDIT 2:
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
// Apply rotation speed to the rotation
yawDegrees *= lookSensitivity;
pitchDegrees *= lookSensitivity;
if (isLookInverted)
{
pitchDegrees = -pitchDegrees;
}
pitchAccum += pitchDegrees;
// Stop the camera from looking any higher than 90 degrees
if (pitchAccum > 90.0f)
{
pitchDegrees = 90.0f - (pitchAccum - pitchDegrees);
pitchAccum = 90.0f;
}
// Stop the camera from looking any lower than 90 degrees
else if (pitchAccum < -90.0f)
{
pitchDegrees = -90.0f - (pitchAccum - pitchDegrees);
pitchAccum = -90.0f;
}
// 'pitchAccum' range is [-90, 90]
//printf("pitchAccum %f \n", pitchAccum);
yawAccum += yawDegrees;
if (yawAccum > 360.0f)
{
yawAccum -= 360.0f;
}
else if (yawAccum < -360.0f)
{
yawAccum += 360.0f;
}
orientation =
glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
glm::gtx::quaternion::angleAxis(yawAccum, 0.0f, 1.0f, 0.0f);
}
EDIT 3:
The following multiplication order allows the camera to rotate around its own axis but face the wrong direction:
glm::mat4 translation;
translation = glm::translate(translation, position);
view = glm::gtx::quaternion::toMat4(orientation) * translation;
EDIT 4:
The following will work (applying the translation matrix, based on the position, after the rotation):
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -position);
view *= translation;
I can't get the dot product with each orientation axis to work, though:
// Rotation
view = glm::gtx::quaternion::toMat4(orientation);
glm::vec3 p(
glm::dot(glm::vec3(view[0][0], view[0][1], view[0][2]), position),
glm::dot(glm::vec3(view[1][0], view[1][1], view[1][2]), position),
glm::dot(glm::vec3(view[2][0], view[2][1], view[2][2]), position)
);
// Translation
glm::mat4 translation;
translation = glm::translate(translation, -p);
view *= translation;
In order to give you a definite answer, I think that we would need the code that shows how you're actually supplying the view matrix and vertices to OpenGL. However, the symptom sounds pretty typical of incorrect matrix order.
Consider some variables:
V represents the inverse of the current orientation of the camera (the quaternion).
T represents the translation matrix holding the position of the camera. This should be an identity matrix with negation of the camera's position going down the fourth column (assuming that we're right-multiplying column vectors).
U represents the inverse of the change in orientation.
p represents a vertex in world space.
Note: all of the matrices are inverse matrices because the transformations will be applied to the vertex, not the camera, but the end result is the same.
By default the OpenGL camera is at the origin looking down the negative-z axis. When the view isn't changing (U==I), then the vertex's transformation from world coordinates to camera coordinates should be: p'=TVp. You first orient the camera (by rotating the world in the opposite direction) and then translate the camera into position (by shifting the world in the opposite direction).
Now there are a few places to put U. If we put U to the right of V, then we get the behavior of a first-person view. When you move the mouse up, whatever is currently in view rotates downward around the camera. When you move the mouse right, whatever is in view rotates to the left around the camera.
If we put U between T and V, then the camera turns relative to the world's axes instead of the camera's. This is strange behavior. If V happens to turn the camera off to the side, then moving the mouse up and down will make the world seem to 'roll' instead of 'pitch' or 'yaw'.
If we put U left of T, then the camera rotates around the world's axes around the world's origin. This can be even stranger because it makes the camera fly through the world faster the farther it is from the origin. However, because the rotation is around the origin, if the camera happens to be looking at the origin, objects there will just appear to be turning around. This is sort of what you're seeing because of the dot products that you're taking to rotate the camera's position.
You check to make sure that pitchAccum stays within [-90,90], but you've commented out the portion that would make use of that fact. This seems odd to me.
The way that you left-multiply pitch but right-multiply yaw makes it so that your quaternions aren't doing much for you. They're just holding your Euler angles. Unless orientation changes are coming in from other places, you could simply say that orientation = glm::gtx::quaternion::angleAxis(pitchAccum*DEG2RAD, 1.0f, 0.0f, 0.0f) * glm::gtx::quaternion::angleAxis(yawAccum*DEG2RAD, 0.0f, 1.0f, 0.0f); and overwrite the old orientation completely.
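A sketch of what that simplification might look like, folding in the clamping that the accumulators already do (whether angleAxis wants degrees or radians depends on the glm version in use, as the question's first edit notes, so a DEG2RAD factor may or may not be needed):
void Camera::rotate(float yawDegrees, float pitchDegrees)
{
    yawDegrees *= lookSensitivity;
    pitchDegrees *= lookSensitivity;
    if (isLookInverted)
        pitchDegrees = -pitchDegrees;

    // Accumulate Euler angles and clamp the pitch just short of the poles.
    pitchAccum += pitchDegrees;
    if (pitchAccum > 89.9f)  pitchAccum = 89.9f;
    if (pitchAccum < -89.9f) pitchAccum = -89.9f;

    yawAccum += yawDegrees;
    if (yawAccum > 360.0f)  yawAccum -= 360.0f;
    if (yawAccum < -360.0f) yawAccum += 360.0f;

    // Rebuild the orientation from scratch each time instead of
    // concatenating incremental rotations: pitch about X, yaw about Y.
    // (This glm build takes the angle in degrees; multiply by DEG2RAD
    // if yours expects radians.)
    orientation =
        glm::gtx::quaternion::angleAxis(pitchAccum, 1.0f, 0.0f, 0.0f) *
        glm::gtx::quaternion::angleAxis(yawAccum,   0.0f, 1.0f, 0.0f);
}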
From what I understand of this tutorial, there might be a reason why the pitch angle is restricted to 90 degrees.
Regardless of whether we use quaternions or a look-at matrix, in the end we give an initial orientation to the camera. With quaternions, this is the initial value of the orientation; with lookAt, it is the initial value of the up vector.
If the direction the camera is facing is parallel to this initial vector, then the cross product of the two will be zero, which means the camera can end up with any orientation when the pitch is 90 or -90 degrees.
In the internal implementation of toMat4(orientation), this would result in one of your x_dir/y_dir/z_dir vectors being a zero vector, which would mean that you can have any orientation. This is also discussed in this book, which says that if the Y angle is 90 degrees, a degree of freedom is lost (Edward Angel and Dave Shreiner, Interactive Computer Graphics: A Top-Down Approach with WebGL, Seventh Edition, Addison-Wesley, 2015); this is known as gimbal lock.
I can see that you are aware of this problem, but in your code the pitch angle is still clamped to exactly 90 degrees if it overflows 90, leaving your camera in this degenerate state. You should consider something like this instead:
if (pitchAccum > 89.999f && pitchAccum <= 90.0f)
{
pitchAccum = 90.001f;
}
else if (pitchAccum < -89.999f && pitchAccum >= -90.0f)
{
pitchAccum = -90.001f;
}
if (pitchAccum >= 360.0f)
{
pitchAccum = 0.0f;
}
else if (pitchAccum <= -360.0f)
{
pitchAccum = 0.0f;
}
Or you can define another custom action of your choice when pitchAccum is 90 degrees.