In my display() method I have:
Camera c = new Camera(canvas);
glu.gluLookAt(c.eyeX, c.eyeY, c.eyeZ, c.point_X, c.point_Y, c.point_Z, 0, 1, 0);
Those variables come from an instance of my Camera class, which has:
float eyeX = 5.0f, eyeY = 5.0f, eyeZ = 5.0f;
float camYaw = 0.0f; //camera rotation in X axis
float camPitch = 0.0f; //camera rotation in Y axis
float point_X = 10.0f;
float point_Y = 5.0f;
float point_Z = 5.0f;
I calculate the delta from the mouse movement; camYaw goes from 0 to 360 degrees and camPitch from -90 to 90 degrees (or 0 to 2*PI and -PI/2 to +PI/2 in radians).
This works fine, but when I feed the calculated point_X, point_Y, point_Z into gluLookAt() the camera moves in a strange way: it seems to rotate the camera around an invisible sphere whose size depends on the radius used in the equations.
public void updateCamera() {
    float radius = 5.0f;
    point_X = (float) (radius * Math.cos(camYaw) * Math.sin(camPitch));
    point_Z = (float) (radius * Math.sin(camPitch) * Math.sin(camYaw));
    point_Y = (float) (radius * Math.cos(camPitch));
}
I'm trying to convert polar (spherical) coordinates to Cartesian coordinates.
The larger the radius, the better it "works".
Changing from degrees to radians doesn't fix it either.
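For reference, the conventional yaw/pitch to look-target mapping (a sketch only, assuming yaw and pitch are in radians, yaw rotates around the vertical axis, pitch is measured from the horizontal plane, and the target is offset from the eye rather than from the origin) is:
point_X = eyeX + radius * cos(camPitch) * cos(camYaw)
point_Y = eyeY + radius * sin(camPitch)
point_Z = eyeZ + radius * cos(camPitch) * sin(camYaw)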
I'm studying OpenGL and following the tutorials on the learnopengl website (https://learnopengl.com/).
Could someone please help me convert the camera from the learnopengl tutorial from a free camera to a third-person camera?
I've tried plugging in the values of the object (character), but I can't make the camera rotate around the player. The object just walks in front of the camera, and if I turn to the side the object turns along with me; it behaves like an FPS camera with the object (character) acting as the weapon.
The code to walk (keyboard):
void processKeyboard(Camera_Movement direction, float deltaTime)
{
    frontY = Front.y; // saved so the free-camera behaviour can be taken out
    if (cameraStyle == FPS_CAMERA) {
        Front.y = 0;
    }
    float velocity = MovementSpeed * deltaTime;
    if (direction == FORWARD)
        Position += Front * velocity;
    if (direction == BACKWARD)
        Position -= Front * velocity;
    if (direction == LEFT)
        Position -= Right * velocity;
    if (direction == RIGHT)
        Position += Right * velocity;
    Front.y = frontY;
}
Mouse event:
void processMouseMovement(float xoffset, float yoffset, GLboolean constrainPitch = true)
{
    xoffset *= MouseSensitivity;
    yoffset *= MouseSensitivity;
    Yaw += xoffset;
    Pitch += yoffset;
    // Make sure that when pitch is out of bounds, screen doesn't get flipped
    if (constrainPitch)
    {
        if (Pitch > 89.0f)
            Pitch = 89.0f;
        if (Pitch < -89.0f)
            Pitch = -89.0f;
    }
    // Update Front, Right and Up Vectors using the updated Euler angles
    updateCameraVectors();
}
To update values:
void updateCameraVectors()
{
    // Calculate the new Front vector
    glm::vec3 front;
    front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    front.y = sin(glm::radians(Pitch));
    front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
    Front = glm::normalize(front);
    // Also re-calculate the Right and Up vector.
    // Normalize the vectors, because their length gets closer to 0 the more you look up or down, which results in slower movement.
    Right = glm::normalize(glm::cross(Front, WorldUp));
    Up = glm::normalize(glm::cross(Right, Front));
}
And to use it:
glm::vec3 playerPosition = glm::vec3(Position.x, terrainY, Position.z) + glm::vec3(1, -0.06f, 1);
Has anyone been through this who could help me?
Thank you.
Here is the code I use to create a third-person camera:
float pitch = -Pitch;
// I use a 90.0f offset,
// but you can play around with that value to suit your needs
float yaw = Yaw - 90.0f;
// constrain pitch so you don't look from below ground level
if (pitch < 0.0) {
    pitch = 0.0;
}
// algorithm from the ThinMatrix video on third-person cameras
float distance = 20.0f;
float x = distance * cos(glm::radians(pitch)) * sin(glm::radians(-yaw));
float y = distance * sin(glm::radians(pitch));
float z = distance * cos(glm::radians(pitch)) * cos(glm::radians(yaw));
glm::vec3 tpCamPos = playerPosition + glm::vec3(-x, y, -z);
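The snippet only computes the camera position; presumably (an assumption, it is not shown above) that position is then used as the eye point of the view matrix with the player as the target, along these lines:
// Sketch of how tpCamPos might be used, assuming a Y-up world:
glm::mat4 view = glm::lookAt(tpCamPos, playerPosition, glm::vec3(0.0f, 1.0f, 0.0f));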
I am using legacy OpenGL to draw a mesh. I am now trying to implement an arcball class to rotate the object with the mouse. However, when I move the mouse, the object either doesn't rotate at all or rotates by far too large an angle.
This is the method that is called when the mouse is clicked:
void ArcBall::startRotation(int xPos, int yPos) {
    int x = xPos - context->getWidth() / 2;
    int y = context->getHeight() / 2 - yPos;
    startVector = ArcBall::mapCoordinates(x, y).normalized();
    endVector = startVector;
    rotating = true;
}
This method simply re-centers the mouse coordinates on the middle of the screen and maps them onto the bounding sphere, giving the starting vector.
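mapCoordinates itself isn't shown in the question; a typical arcball mapping (a sketch, with the function name and radius parameter being my own assumptions) projects the centered point onto a virtual sphere and falls back to a hyperbolic sheet outside it:
// Hypothetical sketch of such a mapping (not the question's actual mapCoordinates):
QVector3D mapToSphere(float x, float y, float r)
{
    float d2 = x * x + y * y;
    if (d2 <= r * r / 2.0f)
        return QVector3D(x, y, std::sqrt(r * r - d2));      // point lies on the sphere
    return QVector3D(x, y, (r * r / 2.0f) / std::sqrt(d2)); // outside: hyperbolic sheet
}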
This is the method that is called when the mouse moves:
void ArcBall::updateRotation(int xPos, int yPos) {
    int x = xPos - context->getWidth() / 2;
    int y = context->getHeight() / 2 - yPos;
    endVector = mapCoordinates(x, y).normalized();
    rotationAxis = QVector3D::crossProduct(endVector, startVector).normalized();
    angle = (float)qRadiansToDegrees(acos(QVector3D::dotProduct(startVector, endVector)));
    rotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
    startVector = endVector;
}
This method again re-centers the mouse coordinates on the middle of the screen, computes the new end vector, and derives a rotation axis and angle from the two vectors.
I then use
glMultMatrixf(ArcBall::rotation.data());
to apply the rotation
I recommend storing the mouse position at the point where you initially click in the view and then calculating the amount of mouse movement in window coordinates. The distance of the movement has to be mapped to an angle, and the rotation axis is perpendicular (normal) to the direction of the mouse movement. The result is a rotation of the object similar to this WebGL demo.
Store the current mouse position in startRotation. Note that this stores the coordinates of the mouse position, not a normalized vector:
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
startVector = QVector3D(ndcX, ndcY, 0.0);
Get the current position in updateRotation:
// xy normalized device coordinates:
float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
endVector = QVector3D(ndcX, ndcY, 0.0);
Calculate the vector from the start position to the end position:
QVector3D direction = endVector - startVector;
The rotation axis is normal to the direction of movement:
rotationAxis = QVector3D(-direction.y(), direction.x(), 0.0).normalized();
Note that even though the type of direction is QVector3D, it is still a 2-dimensional vector: it lies in the XY plane of the viewport, represents the mouse movement on the viewport, and has a z coordinate of 0. A 2-dimensional vector (x, y) can be rotated 90 degrees counter-clockwise by taking (-y, x); for example, (1, 0) becomes (0, 1).
The length of the direction vector represents the angle of rotation. A mouse motion across the entire screen results in a vector of length 2.0. So if dragging across the full screen should result in a full rotation, the length of the vector has to be multiplied by PI; if it should give half a rotation, then by PI/2:
angle = (float)qRadiansToDegrees(direction.length() * 3.141593);
Finally the new rotation has to be applied to the existing rotation and not to the model:
QMatrix4x4 addRotation;
addRotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
rotation = addRotation * rotation;
Final code listing of the methods startRotation and updateRotation:
void ArcBall::startRotation(int xPos, int yPos) {
    // xy normalized device coordinates:
    float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
    float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
    startVector = QVector3D(ndcX, ndcY, 0.0);
    endVector = startVector;
    rotating = true;
}

void ArcBall::updateRotation(int xPos, int yPos) {
    // xy normalized device coordinates:
    float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
    float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
    endVector = QVector3D(ndcX, ndcY, 0.0);
    QVector3D direction = endVector - startVector;
    rotationAxis = QVector3D(-direction.y(), direction.x(), 0.0).normalized();
    angle = (float)qRadiansToDegrees(direction.length() * 3.141593);
    QMatrix4x4 addRotation;
    addRotation.rotate(angle, rotationAxis.x(), rotationAxis.y(), rotationAxis.z());
    rotation = addRotation * rotation;
    startVector = endVector;
}
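With this in place, the glMultMatrixf(ArcBall::rotation.data()) call from the question can stay exactly as it is; only the way rotation is accumulated changes.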
If you want a rotation around the up axis of the object and a tilt of the object along the view-space x axis, then the calculation is different. First apply the rotation matrix around the y axis (up vector), then the current view matrix, and finally the rotation around the x axis:
view-matrix = rotate-X * view-matrix * rotate-Y
The function updateRotation then has to look like this:
void ArcBall::updateRotation(int xPos, int yPos) {
    // xy normalized device coordinates:
    float ndcX = 2.0f * xPos / context->getWidth() - 1.0f;
    float ndcY = 1.0 - 2.0f * yPos / context->getHeight();
    endVector = QVector3D(ndcX, ndcY, 0.0);
    QVector3D direction = endVector - startVector;
    float angleY = (float)qRadiansToDegrees(-direction.x() * 3.141593);
    float angleX = (float)qRadiansToDegrees(-direction.y() * 3.141593);
    QMatrix4x4 rotationX;
    rotationX.rotate(angleX, 1.0f, 0.0f, 0.0f);
    QMatrix4x4 rotationUp;
    rotationUp.rotate(angleY, 0.0f, 1.0f, 0.0f);
    rotation = rotationX * rotation * rotationUp;
    startVector = endVector;
}
I'm trying to set up a Google Maps-style zoom-to-cursor control for my OpenGL camera. I'm using a similar method to the one suggested here. Basically, I get the position of the cursor and calculate the width/height of my perspective view at that depth using some trigonometry. I then change the field of view and calculate how much I need to translate in order to keep the point under the cursor in the same apparent position on the screen. That part works pretty well.
The issue is that I want to limit the fov to be less than 90 degrees. When it ends up greater than 90, I cut it in half and then translate everything away from the camera so that the resulting scene looks the same as it did with the larger fov. The equation to find that necessary translation isn't working, which is strange because it comes from pretty simple algebra. I can't find my mistake. Here's the relevant code.
void Visual::scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
    glm::mat4 modelview = view*model;
    glm::vec4 viewport = { 0.0, 0.0, width, height };
    float winX = cursorPrevX;
    float winY = viewport[3] - cursorPrevY;
    float winZ;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);
    glm::vec3 screenCoords = { winX, winY, winZ };
    glm::vec3 cursorPosition = glm::unProject(screenCoords, modelview, projection, viewport);
    if (isinf(cursorPosition[2]) || isnan(cursorPosition[2])) {
        cursorPosition[2] = 0.0;
    }
    float zoomFactor = 1.1;
    // = zooming in
    if (yoffset > 0.0)
        zoomFactor = 1/1.1;
    //the width and height of the perspective view, at the depth of the cursor position
    glm::vec2 fovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
    camera.setZoomFromFov(fovXY.y * zoomFactor, cursorPosition[2] - zTranslate);
    //don't want fov to be greater than 90, so cut it in half and move the world farther away from the camera to compensate
    //not working...
    if (camera.Zoom > 90.0 && zTranslate*2 > MAX_DEPTH) {
        float prevZoom = camera.Zoom;
        camera.Zoom *= .5;
        //need increased distance between camera and world origin, so that view does not appear to change when fov is reduced
        zTranslate = cursorPosition[2] - tan(glm::radians(prevZoom)) / tan(glm::radians(camera.Zoom) * (cursorPosition[2] - zTranslate));
    }
    else if (camera.Zoom > 90.0) {
        camera.Zoom = 90.0;
    }
    glm::vec2 newFovXY = camera.getFovXY(cursorPosition[2] - zTranslate, width / height);
    //translate so that position under the cursor does not appear to move.
    xTranslate += (newFovXY.x - fovXY.x) * (winX / width - .5);
    yTranslate += (newFovXY.y - fovXY.y) * (winY / height - .5);
    updateView = true;
}
The definition of my view matrix. Called every iteration of the main loop.
void Visual::setView() {
    view = glm::mat4();
    view = glm::translate(view, { xTranslate,yTranslate,zTranslate });
    view = glm::rotate(view, glm::radians(camera.inclination), glm::vec3(1.f, 0.f, 0.f));
    view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f, 0.f, 1.f));
    camera.Right = glm::column(view, 0).xyz();
    camera.Up = glm::column(view, 1).xyz();
    camera.Front = -glm::column(view, 2).xyz(); // minus because OpenGL camera looks towards negative Z.
    camera.Position = glm::column(view, 3).xyz();
    updateView = false;
}
Field of view helper functions.
glm::vec2 getFovXY(float depth, float aspectRatio) {
    float fovY = tan(glm::radians(Zoom / 2)) * depth;
    float fovX = fovY * aspectRatio;
    return glm::vec2{ 2*fovX , 2*fovY };
}

//you have a desired fov, and you want to set the zoom to achieve that.
//factor of 1/2 inside the atan because we actually need the half-fov. Keep full-fov as input for consistency
void setZoomFromFov(float fovY, float depth) {
    Zoom = glm::degrees(2 * atan(fovY / (2 * depth)));
}
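As a quick sanity check, the two helpers round-trip: setZoomFromFov(getFovXY(d, a).y, d) returns the original Zoom, since getFovXY gives 2 * tan(radians(Zoom / 2)) * d for the y component and setZoomFromFov then evaluates degrees(2 * atan(tan(radians(Zoom / 2)))), which is Zoom again.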
The equations I'm using can be found from the diagram here. Since I want to have the same field of view dimensions before and after the angle is changed, I start with
fovY = tan(theta1) * d1 = tan(theta2) * d2
d2 = (tan(theta1) / tan(theta2)) * d1
d1 = distance between camera and cursor position, before fov change = cursorPosition[2] - zTranslate
d2 = distance after
theta1 = fov angle before
theta2 = fov angle after = theta1 * .5
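Spelled out in code, those equations correspond to something like the following (a sketch only; it assumes the half-angle Zoom/2 is the theta that appears in tan, as in getFovXY, and that the sign conventions of zTranslate match the ones used above):
float d1 = cursorPosition[2] - zTranslate;           // distance before the fov change
float halfTheta1 = glm::radians(prevZoom * 0.5f);    // half-fov before
float halfTheta2 = glm::radians(camera.Zoom * 0.5f); // half-fov after
float d2 = (tan(halfTheta1) / tan(halfTheta2)) * d1; // from d2 = (tan(theta1) / tan(theta2)) * d1
zTranslate = cursorPosition[2] - d2;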
Appreciate the help.
I'm attempting to implement an arcball style camera. I use glm::lookAt to keep the camera pointed at a target, and then move it around the surface of a sphere using azimuth/inclination angles to rotate the view.
I'm running into an issue where the view gets flipped upside down when the azimuth approaches 90 degrees.
Here's the relevant code:
Get the projection and view matrices. Runs in the main loop:
void Visual::updateModelViewProjection()
{
    model = glm::mat4();
    projection = glm::mat4();
    view = glm::mat4();
    projection = glm::perspective
    (
        (float)glm::radians(camera.Zoom),
        (float)width / height, // aspect ratio
        0.1f,                  // near clipping plane
        10000.0f               // far clipping plane
    );
    view = glm::lookAt(camera.Position, camera.Target, camera.Up);
}
Mouse move event, for camera rotation
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
    if (leftMousePressed)
    {
        ...
    }
    if (rightMousePressed)
    {
        GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
        GLfloat yoffset = (cursorPrevY - ypos) / 4.0;
        camera.inclination += yoffset;
        camera.azimuth -= xoffset;
        if (camera.inclination > 89.0f)
            camera.inclination = 89.0f;
        if (camera.inclination < 1.0f)
            camera.inclination = 1.0f;
        if (camera.azimuth > 359.0f)
            camera.azimuth = 359.0f;
        if (camera.azimuth < 1.0f)
            camera.azimuth = 1.0f;
        float radius = glm::distance(camera.Position, camera.Target);
        camera.Position[0] = camera.Target[0] + radius * cos(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
        camera.Position[1] = camera.Target[1] + radius * sin(glm::radians(camera.azimuth)) * sin(glm::radians(camera.inclination));
        camera.Position[2] = camera.Target[2] + radius * cos(glm::radians(camera.inclination));
        camera.updateCameraVectors();
    }
    cursorPrevX = xpos;
    cursorPrevY = ypos;
}
Calculate camera orientation vectors
void updateCameraVectors()
{
    Front = glm::normalize(Target-Position);
    Right = glm::rotate(glm::normalize(glm::cross(Front, {0.0, 1.0, 0.0})), glm::radians(90.0f), Front);
    Up = glm::normalize(glm::cross(Front, Right));
}
I'm pretty sure it's related to the way I calculate my camera's right vector, but I cannot figure out how to compensate.
Has anyone run into this before? Any suggestions?
It's a common mistake to use lookAt for rotating the camera. You should not. The backward/right/up directions are the columns of your view matrix. If you already have them, then you don't even need lookAt, which tries to redo some of your calculations. On the other hand, lookAt doesn't help you find those vectors in the first place.
Instead build the view matrix first as a composition of translations and rotations, and then extract those vectors from its columns:
void Visual::cursor_position_callback(GLFWwindow* window, double xpos, double ypos)
{
    ...
    if (rightMousePressed)
    {
        GLfloat xoffset = (xpos - cursorPrevX) / 4.0;
        GLfloat yoffset = (cursorPrevY - ypos) / 4.0;
        camera.inclination = std::clamp(camera.inclination + yoffset, -90.f, 90.f);
        camera.azimuth = fmodf(camera.azimuth + xoffset, 360.f);
        view = glm::mat4();
        view = glm::translate(view, glm::vec3(0.f, 0.f, camera.radius)); // add camera.radius to control the distance-from-target
        view = glm::rotate(view, glm::radians(camera.inclination + 90.f), glm::vec3(1.f,0.f,0.f));
        view = glm::rotate(view, glm::radians(camera.azimuth), glm::vec3(0.f,0.f,1.f));
        view = glm::translate(view, camera.Target);
        camera.Right = glm::column(view, 0);
        camera.Up = glm::column(view, 1);
        camera.Front = -glm::column(view, 2); // minus because OpenGL camera looks towards negative Z.
        camera.Position = glm::column(view, 3);
        view = glm::inverse(view);
    }
    ...
}
Then remove the code that calculates view and the direction vectors from updateModelViewProjection and updateCameraVectors.
Disclaimer: this code is untested. You might need to fix a minus sign somewhere, order of operations, or the conventions might mismatch (Z is up or Y is up, etc...).
Hello, I am having a strange issue with mouse movement in OpenGL. Here is my code for moving the camera with the mouse:
void camera(int x, int y)
{
    GLfloat xoff = x - lastX;
    GLfloat yoff = lastY - y; // Reversed since y-coordinates range from bottom to top
    lastX = x;
    lastY = y;
    GLfloat sensitivity = 0.5f;
    xoff *= sensitivity;
    yoff *= sensitivity;
    yaw += xoff;   // yaw is x
    pitch += yoff; // pitch is y
    // Limit up and down camera movement to 90 degrees
    if (pitch > 89.0)
        pitch = 89.0;
    if (pitch < -89.0)
        pitch = -89.0;
    // Update camera position and viewing angle
    Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
    Front.y = sin(convertToRads(pitch));
    Front.z = sin(convertToRads(yaw)) * cos(convertToRads(pitch));
}
convertToRads() is a small function I created to convert the mouse coordinates to radians.
With this code I can move my camera however I want, but if I try to look all the way up, at around 45 degrees it rotates once or twice around the x-axis and then continues to increase along the y-axis. I can't work out what I have done wrong, so if anyone could help I would appreciate it.
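For reference, convertToRads() is presumably just a degrees-to-radians helper along these lines (an assumption; the question doesn't show it):
// Hypothetical sketch -- the question's actual convertToRads is not shown.
GLfloat convertToRads(GLfloat degrees)
{
    return degrees * 3.14159265f / 180.0f;
}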
It seems you have misplaced a parenthesis:
Front.x = cos(convertToRads(yaw) * cos(convertToRads(pitch)));
instead of:
Front.x = cos(convertToRads(yaw)) * cos(convertToRads(pitch));