I'm a student new to OpenGL. Currently, I'm doing a project that creates a scene.
Right now, my team is using gluLookAt() for our camera. What I want to accomplish is to rotate the LookAt vector around a certain point, namely the point the camera is looking at.
This would produce a sort of "swaying in a circle". I need this because I am making a dart game for the scene: the camera currently stays still, but I need it to move in a circle while still allowing the user's mouse to influence it, to create a drunken movement. That is why I am not considering rotating the Up or Eye vectors.
Currently, my look-at code is like this:
int deltax = x - mouse.mX;
int deltay = y - mouse.mY;

// Convert the mouse deltas into yaw/pitch changes
cameradart.mYaw   -= ((deltax / 360.0) * 3.142) * 0.5;
cameradart.mPitch -= deltay * 0.02;

mouse.mX = x;
mouse.mY = y;

// Build the look direction from the yaw angle; pitch is used directly as the Y component
cameradart.lookAt.x = sin(cameradart.mYaw);
cameradart.lookAt.y = cameradart.mPitch;
cameradart.lookAt.z = cos(cameradart.mYaw);

gluLookAt(cameradart.eye.x, cameradart.eye.y, cameradart.eye.z,
          cameradart.eye.x + cameradart.lookAt.x,
          cameradart.eye.y + cameradart.lookAt.y,
          cameradart.eye.z + cameradart.lookAt.z,
          cameradart.up.x, cameradart.up.y, cameradart.up.z);
I know that it could be done more easily using a different camera implementation, but I really don't want to disrupt my team's code by moving away from gluLookAt().
There are a couple of solutions in my mind; I'll tell you the easiest to understand/implement as a new graphics student.
Assuming at first you're looking at (0, 0, 1) - store that vector - think of a point that's drawing a circle while you're looking at it.
Do it first just for turning right and left (2D, on X & Z):
- Let HDiff be the horizontal difference between the old mouse position and the new one (converted to radians)
- Update x = cos(HDiff)
- Update z = sin(HDiff)
I didn't try it, but it should work :)
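To tie this back to the question's gluLookAt code, a minimal sketch of the drunken sway could layer a time-based circular offset on top of the mouse-driven angles before building the look-at vector. Here swayRadius, swaySpeed and the elapsed-time value t are hypothetical names, not part of the original code:
// Hypothetical sway parameters; t is the elapsed time in seconds.
const float swayRadius = 0.05f;  // how far the view swings
const float swaySpeed  = 2.0f;   // how fast it circles

// Layer a circular offset on top of the mouse-driven angles.
float swayYaw   = cameradart.mYaw   + swayRadius * cos(swaySpeed * t);
float swayPitch = cameradart.mPitch + swayRadius * sin(swaySpeed * t);

cameradart.lookAt.x = sin(swayYaw);
cameradart.lookAt.y = swayPitch;
cameradart.lookAt.z = cos(swayYaw);
The mouse still drives mYaw and mPitch exactly as before; the circular term just wobbles the final look direction around them.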
If you want to be able to manipulate the camera, using a camera matrix is a much more effective mechanism than working with gluLookAt. GLM is a good library for matrix and vector math and includes a lookAt function you can use to initialize the matrix, or you can just initialize it with a series of operations. However, remember that lookAt produces a view matrix, and the view matrix is the inverse of the camera matrix.
This piece of code has a demonstration of what I'm talking about. Specifically, look at the player member variable and how it's manipulated:
glm::mat4 player;
...
glm::vec3 playerPosition(0, eyeHeight, ipd * 4.0f);
player = glm::inverse(glm::lookAt(playerPosition, glm::vec3(0, eyeHeight, 0), GlUtils::Y_AXIS));
This approach lets you apply changes like rotation and translation directly to the player matrix:
// Rotate on the Y axis
player = glm::rotate(player, angle, glm::vec3(0, 1, 0));
This is much more intuitive than manipulating the view matrix, since changes to the view matrix always have to be the inverse of what you'd do to the player matrix.
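For example, moving the player is just a translation applied to the same matrix (a one-line sketch, not from the linked code; right-multiplying applies it in the player's local frame):
// Move the player one unit along its local -Z axis (forward in OpenGL convention).
player = glm::translate(player, glm::vec3(0, 0, -1));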
When you're ready to render, you need to convert the player matrix to a view matrix by taking its inverse. In my example it's done like this:
gl::Stacks::modelview().top() = riftOrientation * glm::inverse(player);
This is because I'm using an application-based modelview matrix stack that gets applied to the shader programs I'm running.
For an OpenGL 1.x program, you'd instead use glLoadMatrixf:
glMatrixMode(GL_MODELVIEW);
glm::mat4 modelview = glm::inverse(player);
glLoadMatrixf(glm::value_ptr(modelview)); // value_ptr lives in <glm/gtc/type_ptr.hpp>
I've implemented an FPS camera based on the up, right and view vectors from this.
Right now I want to be able to interact with the world by placing cubes in a Minecraft style.
My lookAt vector is the sum of the view vector and the camera position, so my first attempt was to draw a cube at lookAt, but this is causing strange behaviour.
I compute every vector as on the site I mentioned (such that lookAt = camera_position + view_direction), but the cube drawn is always around me. I've tried several things, like actually placing it (rounding the lookAt), and it appears near the wanted position, but not at the place I'm looking at.
Given these vectors, how can I draw a cube that's centered at the position my camera is looking at, but a little bit further away (exactly like Minecraft)?
but the cube drawn is always around me.
Yeah, and that's obvious: you place cubes on the surface of a sphere of radius view_direction centered at camera_position.
Given these vectors, how can I draw a cube that's centered at the position my camera is looking at, but a little bit further away (exactly like Minecraft)?
You need to place cubes at the intersection of the view vector with the scene geometry. In the simplest case, that geometry can be just the ground plane, so you intersect the view vector with the ground plane. Then you round the intersection coordinates to the nearest grid node: xyz = round(xyz / cubexyz) * cubexyz, where cubexyz is the cube size.
Approximate code:
// Ray-plane intersection: returns the point where the ray hits the plane.
// Assumes the ray is not parallel to the plane (prod2 != 0).
Vector3D intersectPoint(Vector3D rayVector, Vector3D rayPoint,
                        Vector3D planeNormal, Vector3D planePoint) {
    Vector3D diff = rayPoint - planePoint;
    double prod1 = diff.dot(planeNormal);
    double prod2 = rayVector.dot(planeNormal);
    double prod3 = prod1 / prod2;
    return rayPoint - rayVector * prod3;
}
.......
Vector3D cubePos = intersectPoint(view_direction, camera_position,
                                  Vector3D(0, 1, 0), Vector3D(0, 0, 0));
cubePos = round(cubePos / cubeSize) * cubeSize; // snap to the cube grid
AddCube(cubePos);
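If you're already using GLM, its GTX extensions ship a ready-made ray-plane test that does the same job as the intersectPoint above (a sketch; newer GLM versions require GLM_ENABLE_EXPERIMENTAL before including GTX headers, and AddCube/cubeSize are the names from this answer):
#include <glm/glm.hpp>
#include <glm/gtx/intersect.hpp>

float distance;
glm::vec3 dir = glm::normalize(view_direction);
if (glm::intersectRayPlane(camera_position, dir,
                           glm::vec3(0, 0, 0),  // a point on the ground plane
                           glm::vec3(0, 1, 0),  // ground plane normal
                           distance)) {
    glm::vec3 hit = camera_position + dir * distance;
    glm::vec3 cubePos = glm::round(hit / cubeSize) * cubeSize;  // snap to grid
    AddCube(cubePos);
}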
It's hard to tell without having images to look at, but lookAt is most likely your normalized forward vector? If I understood you correctly, you'd want to do something like objectpos = camerapos + forward * 10.0f (where 10.0f is the distance, in 3D space units, at which you want to place the object in front of you) to make sure that it's placed a few units in front of your FPS controller.
Actually, if view_direction is your normalized forward vector and your lookAt is camera_pos + view_direction, then you'd end up with something very close to your camera position, which would explain why the cube spawns inside you. Either way, my suggestion should still work :)
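In glm terms that suggestion is a one-liner (the 10.0f distance is arbitrary):
// Place the object a fixed distance in front of the camera.
glm::vec3 objectPos = camera_position + glm::normalize(view_direction) * 10.0f;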
I am trying to rotate a "cube" full of little cubes using the keyboard, which works, but not very well.
I am struggling with setting the pivot point of the rotation to the very center of the big "cube" / world. As you can see in this video, the center of the front (initial) face of the big cube is the pivot point of my rotation right now, which is a bit confusing once I rotate the world a little bit.
To explain it better: it looks like I am moving the initial face of the cube when using the keys to rotate it. So the pivot point might be okay from that point of view, but what is wrong in my code? I don't understand why the rotation follows the front face instead of turning the entire cube around its very center.
To generate all the little cubes, I call a function inside 3 for loops (x, y, z); the function returns cubeMat, so I have all the cubes generated, as you can see in the video.
cubeMat = scale(cubeMat, {0.1f, 0.1f, 0.1f});
cubeMat = translate(cubeMat, {positioning...});
For the rotation itself, a short example of rotating to the left looks like this:
mat4 total_rotation; // global variable - never reset
mat4 rotation;       // local variable

if (keysPressed[GLFW_KEY_LEFT]) {
    timer -= delta;
    rotation = rotate(mat4{}, -delta, {0, 1, 0});
}
... // rest of key controls

total_rotation *= rotation;
And inside those 3 for loops is also this:
program.setUniform("ModelMatrix", total_rotation * cubeMat);
cube.render();
I have read that I should use a transformation to move the pivot point to the middle, but in this case, how can I set the pivot point to the little cube at the center of the world? That cube is obviously at x=2, y=2, z=2, since in the for loops I generate cubes starting at x=0.
You are accumulating the rotation matrices by right-multiplication. This way, each rotation is performed in the local coordinate system that results from all previous transformations. And this is why your right-rotation turns the cube after an up-rotation (because it is a right-rotation in the local coordinate system).
But you want your rotations to be in the global coordinate system. Thus, simply revert the multiplication order:
total_rotation = rotation * total_rotation;
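Plugged into the key-handling code from the question, the accumulation would look like this (a sketch; the GLFW_KEY_RIGHT branch is assumed to mirror the left one shown):
mat4 rotation; // rebuilt every frame, starts as identity

if (keysPressed[GLFW_KEY_LEFT])
    rotation = rotate(mat4{}, -delta, {0, 1, 0});
if (keysPressed[GLFW_KEY_RIGHT])
    rotation = rotate(mat4{},  delta, {0, 1, 0});

// Pre-multiply: each new rotation acts in the global coordinate system.
total_rotation = rotation * total_rotation;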
I'm currently in the process of finishing the implementation of a camera that functions in the same way as the camera in Maya. The part I'm stuck on is the tumble functionality.
The problem is the following: the tumble feature works fine so long as the position of the camera is not parallel with the up vector (currently defined to be (0, 1, 0)). As soon as the camera becomes parallel with this vector (so it is looking straight up or down), the camera locks in place and will only rotate around the up vector instead of continuing to roll.
This question has already been asked here; unfortunately, there is no actual solution to the problem. For reference, I also tried updating the up vector as I rotated the camera, but the resulting behaviour is not what I require (the view rolls as a result of the new orientation).
Here's the code for my camera:
using namespace glm;
// point is the position of the cursor in screen coordinates from GLFW
float deltaX = point.x - mImpl->lastPos.x;
float deltaY = point.y - mImpl->lastPos.y;
// Transform from screen coordinates into camera coordinates
Vector4 tumbleVector = Vector4(-deltaX, deltaY, 0, 0);
Matrix4 cameraMatrix = lookAt(mImpl->eye, mImpl->centre, mImpl->up);
Vector4 transformedTumble = inverse(cameraMatrix) * tumbleVector;
// Now compute the two vectors to determine the angle and axis of rotation.
Vector p1 = normalize(mImpl->eye - mImpl->centre);
Vector p2 = normalize((mImpl->eye + Vector(transformedTumble)) - mImpl->centre);
// Get the angle and axis
float theta = 0.1f * acos(dot(p1, p2));
Vector axis = cross(p1, p2);
// Rotate the eye.
mImpl->eye = Vector(rotate(Matrix4(1.0f), theta, axis) * Vector4(mImpl->eye, 0));
The vector library I'm using is GLM. Here's a quick reference on the custom types used here:
typedef glm::vec3 Vector;
typedef glm::vec4 Vector4;
typedef glm::mat4 Matrix4;
typedef glm::vec2 Point2;
mImpl is a PIMPL that contains the following members:
Vector eye, centre, up;
Point2 lastPos;
Here is what I think: it has something to do with the gimbal lock that occurs with Euler angles (and thus with spherical coordinates). (I am assuming your object is at (0, 0, 0) and the up vector is set to (0, 1, 0).) If your camera is exactly above (0, zoom, 0) or beneath (0, -zoom, 0) the object, the camera can only roll.
One workaround: if you exceed the minimum (0, -zoom, 0) or the maximum (0, zoom, 0), toggle a boolean that tells you whether to treat deltaY as positive or not. Since the lock can also be seen as a singularity, you could instead simply limit your polar angle to values between -89.99° and 89.99°.
There might be some mathematical trick to resolve this, but I would do it with linear algebra. Your problem could be solved like this:
You need to introduce a new right-vector. The cross product of two of these vectors gives you the third: camera-vector = up-vector x right-vector. Imagine these vectors starting at (0, 0, 0); then your camera position is simply the negation (0, 0, 0) - (camera-vector).
If you get some deltaX, you rotate towards the right-vector (around the up-vector) and update the right-vector. Any influence of deltaX should not change your up-vector.
If you get some deltaY, you rotate towards the up-vector (around the right-vector) and update the up-vector. (This has no influence on the right-vector.)
At https://en.wikipedia.org/wiki/Rotation_matrix, under "Rotation matrix from axis and angle", you can find an important formula: u is the vector you want to rotate around, and theta is the angle you want to pivot by. The size of theta is proportional to deltaX/Y.
For example: we get an input from deltaX, so we rotate around the up-vector.
up-vector    := (0, 1, 0)
right-vector := (0, 0, 1)
cam-vector   := (1, 0, 0)
theta        := -1 * 30°   // -1 due to the positive mathematical direction of rotation
R = {[cos(-30°), 0, -sin(-30°)], [0, 1, 0], [sin(-30°), 0, cos(-30°)]}
new-cam-vector = R * cam-vector   // normal matrix multiplication
One thing is left to be done: Update the right-vector.
right-vector = camera-vector x up-vector
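A minimal glm sketch of this scheme (the sensitivity constant and the tumble function name are hypothetical; camVector, rightVector and upVector are the three vectors described above):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

void tumble(float deltaX, float deltaY, glm::vec3& camVector,
            glm::vec3& rightVector, glm::vec3& upVector)
{
    const float sensitivity = 0.005f;  // hypothetical mouse-delta scale

    // deltaX: rotate the cam vector around the up vector (up is unchanged),
    // then rebuild the right vector from the new basis.
    glm::mat4 yaw = glm::rotate(glm::mat4(1.0f), -deltaX * sensitivity, upVector);
    camVector   = glm::vec3(yaw * glm::vec4(camVector, 0.0f));
    rightVector = glm::cross(camVector, upVector);

    // deltaY: rotate both the cam and up vectors around the right vector
    // (a rotation about the right vector leaves the right vector itself alone).
    glm::mat4 pitch = glm::rotate(glm::mat4(1.0f), -deltaY * sensitivity, rightVector);
    camVector = glm::vec3(pitch * glm::vec4(camVector, 0.0f));
    upVector  = glm::vec3(pitch * glm::vec4(upVector, 0.0f));
}
Because the up vector tumbles along with the camera, there is no pole at which the motion degenerates into a roll.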
I'm trying to implement a quaternion-based camera, but when moving around the X and Y axes, the camera produces an unwanted roll on the Z axis. I want to be able to look around freely on all axes.
I've read other topics about this problem (for example: http://www.flipcode.com/forums/thread/6525 ), but I'm not getting what is meant by "Fix this by continuously rebuilding the rotation matrix by rotating around the WORLD axis, i.e around <1,0,0>, <0,1,0>, <0,0,1>, not your local coordinates, whatever they might be."
// Camera.hpp
glm::quat rotation;

// Camera.cpp
void Camera::rotate(glm::vec3 vec)
{
    glm::quat paramQuat = glm::quat(vec);
    rotation = paramQuat * rotation;
}
I call the rotate function like this:
cam->rotate(glm::vec3(0, 0.5, 0));
This must have to do with local/world coordinates, right? I'm just not getting it, since I thought quaternions are always relative to each other (and thus a quaternion can't be in "world" or "local" space?).
Also, should I use more than one quaternion for the camera?
As far as I understand it, and from looking at the code you provided, what they mean is that you shouldn't store and apply the rotation incrementally by applying rotate on the rotation quat all the time. Instead, keep track of one angle per world axis (X and Y), rebuild a quaternion around each world axis from those angles, and calculate the rotation as the product of the two.
[edit: some added (pseudo)code]
// Camera.cpp
void Camera::SetRotation(glm::quat value)
{
    rotation = value;
}

// controller.cpp --> probably a place where you'd want to translate
// user input and store your rotational state

// Accumulate absolute angles (radians), then rebuild both quaternions
// from the fixed world axes each time.
xAngle += deltaX;
yAngle += deltaY;
glm::quat rotationX = glm::angleAxis(xAngle, glm::vec3(1, 0, 0));
glm::quat rotationY = glm::angleAxis(yAngle, glm::vec3(0, 1, 0));
camera.SetRotation(rotationX * rotationY);
I am trying to follow this course about computer graphics, but I'm stuck on homework 1. I don't understand the roles of the eye and up vectors. The description of the homework can be found at this link; there's also the skeleton of the first assignment.
So far I have the following code:
// Transform.cpp: implementation of the Transform class.
#include "Transform.h"

// Please implement the following functions:

// Helper rotation function.
mat3 Transform::rotate(const float degrees, const vec3& axis) {
    // Rodrigues' formula: R = cos(t)*I + sin(t)*K + (1 - cos(t))*a*a^T,
    // where K is the cross-product matrix of the axis a.
    float radians = degrees * M_PI / 180.0f;
    mat3 r1(cos(radians));
    mat3 r2(0, -axis.z, axis.y,
            axis.z, 0, -axis.x,
            -axis.y, axis.x, 0);
    mat3 r3(axis.x*axis.x, axis.x*axis.y, axis.x*axis.z,
            axis.x*axis.y, axis.y*axis.y, axis.y*axis.z,
            axis.x*axis.z, axis.z*axis.y, axis.z*axis.z);
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            r2[i][j] = r2[i][j] * sin(radians);
            r3[i][j] = r3[i][j] * (1 - cos(radians));
        }
    }
    return r1 + r2 + r3;
}
// Transforms the camera left around the "crystal ball" interface
void Transform::left(float degrees, vec3& eye, vec3& up) {
    eye = eye * rotate(degrees, up);
}

// Transforms the camera up around the "crystal ball" interface
void Transform::up(float degrees, vec3& eye, vec3& up) {
    vec3 newAxis = glm::cross(eye, up);
}

// Your implementation of the glm::lookAt matrix
mat4 Transform::lookAt(vec3 eye, vec3 up) {
    return lookAtMatrix;
}

Transform::Transform()
{
}

Transform::~Transform()
{
}
For the left method, it appears to be doing the right thing, which is rotating the object around the y-axis (actually, I'm not sure whether the object is moving or whether I'm moving the camera; can someone clarify?).
For the up method, I cannot make it work; it should rotate the object (or the camera?) around the x-axis (at least that's what I think).
Finally, I don't understand what the lookAt method should do.
Can someone help me understand the actions to be performed?
Can someone explain what are the roles of vectors eye and up?
View transforms are often implemented using a "look-at" function. The idea is that you specify where the camera is, what direction it is looking in, and what direction represents "up" in your particular space, and you get back a matrix which represents that transform.
It looks like you're trying to implement some kind "rotating ball" navigation control. That's fairly simple - horizontal movement should rotate around some "Y" axis, and vertical movement should rotate around the "right" (or X) axis. Generally those rotations work around the current view axes, rather than globally, so that the movement is intuitive. I'm not sure exactly what you're looking for there.
A look-at function works as follows.
A 3x3 matrix representing a rotation can be viewed as being composed of the 3 perpendicular unit axes of the space you are transforming into. So if you can supply those vectors, you can build the matrix.
The first axis is easy. A camera is typically oriented to look along "Z", so if you take the vector representing the direction of the thing being looked at from the camera's position, then normalise it, this is the Z axis.
Then you need to define a distinct 'up' vector - (0,1,0) is typical, but you will need to choose a different one in cases where the Z-axis is pointing in the same direction.
The cross product of this 'up' vector and the 'Z' axis gives the 'X' axis - this is because the cross product gives a perpendicular vector, and what constitutes horizontal will be perpendicular to both the 'forward' direction, and 'up'.
Then the cross product of the 'X' and 'Z' axes gives the 'Y' axis (which is not necessarily the same as the 'up' vector you supplied; consider looking towards the ceiling or towards the floor).
These three normalised axes (x, y, z) directly form a rotation matrix.
The translation portion of the matrix is generally the position of the camera, transformed by the rotation's inverse (such that when transforming the camera position by the lookat matrix itself, it should end up back at the origin).
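A glm-flavoured sketch of that construction (it matches what glm::lookAt itself produces; note that OpenGL cameras conventionally look down -Z, hence the negated forward axis):
#include <glm/glm.hpp>

glm::mat4 makeLookAt(glm::vec3 eye, glm::vec3 centre, glm::vec3 up)
{
    // The three perpendicular unit axes described above.
    glm::vec3 z = glm::normalize(centre - eye);       // forward
    glm::vec3 x = glm::normalize(glm::cross(z, up));  // right
    glm::vec3 y = glm::cross(x, z);                   // true "up" for this view

    // Rows of the rotation are the axes; the translation is the eye
    // position transformed by the rotation (so the eye maps to the origin).
    glm::mat4 m(1.0f);
    m[0] = glm::vec4(x.x, y.x, -z.x, 0.0f);
    m[1] = glm::vec4(x.y, y.y, -z.y, 0.0f);
    m[2] = glm::vec4(x.z, y.z, -z.z, 0.0f);
    m[3] = glm::vec4(-glm::dot(x, eye), -glm::dot(y, eye), glm::dot(z, eye), 1.0f);
    return m;
}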
1) Your course is using the OpenGL library, and your homework assignment is to fill in the skeleton module "Transform.cpp".
2) The method you're asking about is "mat4 Transform::lookAt(vec3 eye, vec3 up)":
lookAt: Finally, you need to code in the transformation matrix, given the eye and up vectors. You will likely need to refer to the class notes to do this. It is likely to help to define a uvw coordinate frame (as 3 vectors), and to build up an auxiliary 4x4 matrix M which is returned as the result of this function. Consult class notes and lectures for this part.
3) A hint for what these two arguments "eye" and "up" mean should be in your class notes and lectures.
4) Another hint is to "define a uvw coordinate frame (as three vectors), and build up an auxiliary 4x4 matrix ... which is returned as a result...".
5) A final hint:
Q: What's the difference between an OpenGL mat3 and mat4?
A:
What exactly does the mat3(a mat4 matrix) statement in GLSL do?
mat3(MVI) * normal
It returns the upper 3x3 matrix from the 4x4 matrix and multiplies the normal by that. This matrix is called the 'normal matrix'; you use it to bring your normals from world space to eye space.
The reason the original matrix is 4x4 and not 3x3 is that 4x4 matrices let you do affine transformations and contain useful information for perspective rendering. But to take a normal from world space to eye space, you just need the 3x3 modelview matrix.
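In glm, the same normal-matrix computation looks like this (a sketch, assuming modelview is a glm::mat4 and normal a glm::vec3; the inverse-transpose keeps normals perpendicular under non-uniform scaling):
#include <glm/glm.hpp>

glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(modelview)));
glm::vec3 eyeNormal    = glm::normalize(normalMatrix * normal);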