Animating a Qt3D rotation around a specific axis - C++

I am trying to animate an object in Qt3D to rotate around a specific axis (not the origin) while performing other transformations (e.g. scaling and translating).
The following code rotates the object as I want it but without animation yet.
QMatrix4x4 mat = QMatrix4x4();
mat.scale(10);
mat.translate(QVector3D(-1.023, 0.836, -0.651));
mat.rotate(QQuaternion::fromAxisAndAngle(QVector3D(0,1,0), -20));
mat.translate(-QVector3D(-1.023, 0.836, -0.651));
//scaling here after rotating/translating shifts the rotation back to be around the origin (??)
Qt3DCore::QTransform *transform = new Qt3DCore::QTransform(root);
transform->setMatrix(mat);
//...
entity->addComponent(transform); //the entity of the object I am animating
I did not manage to incorporate a QPropertyAnimation into this code as I would like. Animating only the rotationY property does not let me specify the rotation origin, so the object rotates around the wrong axis. Animating the matrix property produces the correct end result, but the interpolation rotates in a way that is not desired/realistic in my scenario. So how can I animate this rotation so that it rotates around a given axis?
EDIT: There is a QML equivalent to what I want. There, you can specify the origin of the rotation and just animate the angle values:
Rotation3D {
    id: doorRotation
    angle: 0
    axis: Qt.vector3d(0, 1, 0)
    origin: Qt.vector3d(-1.023, 0.836, -0.651)
}
NumberAnimation { target: doorRotation; property: "angle"; from: 0; to: -20; duration: 500 }
How can I do this in C++?

I think it is possible to use the Qt 3D: Simple C++ Example to obtain what you are looking for by simply modifying the updateMatrix() method in orbittransformcontroller.cpp:
void OrbitTransformController::updateMatrix()
{
    m_matrix.setToIdentity();
    // Move to the origin point of the rotation
    m_matrix.translate(40, 0.0f, -200);
    // Infinite 360° rotation
    m_matrix.rotate(m_angle, QVector3D(0.0f, 1.0f, 0.0f));
    // Radius of the rotation
    m_matrix.translate(m_radius, 0.0f, 0.0f);
    m_target->setMatrix(m_matrix);
}
Note: It is easier to change the torus into a small sphere to observe the rotation.
EDIT from question asker: This idea is indeed a very good way of solving the problem! To apply it specifically to my scenario the updateMatrix() function has to look like this:
void OrbitTransformController::updateMatrix()
{
    // Take the existing matrix so that previous transformations are not lost
    m_matrix = m_target->matrix();
    // Move to the origin point of the rotation; _rotationOrigin would be a member variable
    m_matrix.translate(_rotationOrigin);
    // Rotate (around the y axis)
    m_matrix.rotate(m_angle, QVector3D(0.0f, 1.0f, 0.0f));
    // Translate back
    m_matrix.translate(-_rotationOrigin);
    m_target->setMatrix(m_matrix);
}
I have made _rotationOrigin also a property in the controller class, which can then be set externally to a different value for each controller.
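Driving the controller's angle property with a QPropertyAnimation then gives the animated door rotation. A minimal sketch of the hook-up, assuming the controller exposes "angle" as a Q_PROPERTY (as in the Simple C++ example) and that setRotationOrigin() is the setter for the _rotationOrigin property mentioned above:
#include <QPropertyAnimation>

// Sketch: animate the controller's "angle" property from 0 to -20 degrees
// over 500 ms; updateMatrix() then rebuilds the transform on every change.
auto *controller = new OrbitTransformController(transform);
controller->setTarget(transform);
controller->setRotationOrigin(QVector3D(-1.023f, 0.836f, -0.651f)); // assumed setter name

auto *doorAnimation = new QPropertyAnimation(controller);
doorAnimation->setTargetObject(controller);
doorAnimation->setPropertyName("angle");
doorAnimation->setStartValue(0.0f);
doorAnimation->setEndValue(-20.0f);
doorAnimation->setDuration(500);
doorAnimation->start();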

Related

Rotating 2D camera to space ship's heading in OpenGL (OpenTK)

The game is a top-down 2D space ship game -- think of "Asteroids."
Box2Dx is the physics engine and I extended the included DebugDraw, based on OpenTK, to draw additional game objects. Moving the camera so it's always centered on the player's ship and zooming in and out work perfectly. However, I really need the camera to rotate along with the ship so it's always facing in the same direction. That is, the ship will appear to be frozen in the center of the screen and the rest of the game world rotates around it as it turns.
I've tried adapting code samples, but nothing works. The best I've been able to achieve is a skewed and cut-off rendering.
Render loop:
// Clear.
Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
// other rendering omitted (planets, ships, etc.)
this.OpenGlControl.Draw();
Update view -- centers on ship and should rotate to match its angle. For now, I'm just trying to rotate it by an arbitrary angle for a proof of concept, but no dice:
public void RefreshView()
{
    int width = this.OpenGlControl.Width;
    int height = this.OpenGlControl.Height;
    Gl.glViewport(0, 0, width, height);
    Gl.glMatrixMode(Gl.GL_PROJECTION);
    Gl.glLoadIdentity();
    float ratio = (float)width / (float)height;
    Vec2 extents = new Vec2(ratio * 25.0f, 25.0f);
    extents *= viewZoom;
    // rotate the view
    var shipAngle = 180.0f; // just a test angle for proof of concept
    Gl.glRotatef(shipAngle, 0, 0, 0);
    Vec2 lower = this.viewCenter - extents;
    Vec2 upper = this.viewCenter + extents;
    // L/R/B/T
    Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
    Gl.glMatrixMode(Gl.GL_MODELVIEW);
}
Now, I'm obviously doing this wrong. Degrees of 0 and 180 will keep it right-side-up or flip it, but any other degree will actually zoom it in/out or result in only blackness, nothing rendered. Below are examples:
If ship angle is 0.0f, then game world is as expected:
Degree of 180.0f flips it vertically... seems promising:
Degree of 45 zooms out and doesn't rotate at all... that's odd:
Degree of 90 returns all black. In case you've never seen black:
Please help!
Firstly, the second through fourth arguments of glRotatef() are the rotation axis, so specify them correctly, as @pingul already pointed out.
More importantly, the rotation is currently applied to the projection matrix:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
In this line your orthographic 2D projection matrix is multiplied with the previous rotation, and the result ends up on the projection matrix stack, which I believe is not what you want.
The solution is to move your rotation call to after the model-view matrix mode is selected, as below:
// L/R/B/T
Glu.gluOrtho2D(lower.X, upper.X, lower.Y, upper.Y);
Gl.glMatrixMode(Gl.GL_MODELVIEW);
// rotate the view
var shipAngle = 180.0f; // just a test angle for proof of concept
Gl.glRotatef(shipAngle, 0.0f, 0.0f, 1.0f);
Now your rotation will be applied to the model-view matrix stack (I believe this is the effect you want). Keep in mind that glRotatef() creates a rotation matrix and multiplies it with the matrix at the top of the currently selected stack.
I would also strongly suggest you move away from the fixed-function pipeline if possible, as suggested by @BDL.

How to get the centre of an object in DirectX 11 (C++)

How do I obtain the centre of the objects I'm drawing in DirectX 11? I'm having a problem when rotating the objects: they rotate about their position, but that position is at one of their corners, which results in a circular motion when rotating. I'm trying to rotate about the centre (origin) instead.
I'm translating before rotating using the following:
setObjectWorld(XMMatrixIdentity());
setTranslation(XMMatrixTranslation(getPositionX(), getPositionY(), getPositionZ()));
setRotationAxis(XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));
setRotation(XMMatrixRotationAxis(getRotationAxisVector(), delta));
setObjectWorld(getTranslationMatrix() * getRotationMatrix());
The outcome is this:
Rotation/scale should be done before translation.
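With DirectXMath's row-vector convention the matrix product is applied left to right, so putting the rotation before the translation spins the object about its own origin and only then moves it into place. A sketch of the reordered code, reusing the setter/getter names from the question:
// Rotate about the object's local origin first, then translate into position.
setObjectWorld(XMMatrixIdentity());
setRotationAxis(XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f));
setRotation(XMMatrixRotationAxis(getRotationAxisVector(), delta));
setTranslation(XMMatrixTranslation(getPositionX(), getPositionY(), getPositionZ()));
setObjectWorld(getRotationMatrix() * getTranslationMatrix()); // rotate first, then translate
If the mesh's local origin sits at a corner rather than its centre, an extra translation to the centre before the rotation (and back afterwards) is needed on top of this.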

Issue with GLM Camera X,Y Rotation introducing Z Rotation

So I've been having trouble with a camera I've implemented in OpenGL and C++ using the GLM library. The type of camera I'm aiming for is a fly around camera which will allow easy exploration of a 3D world. I have managed to get the camera pretty much working, it's nice and smooth, looks around and the movement seems to be nice and correct.
The only problem I seem to have is that rotation about the camera's X and Y axes (looking up and down) introduces some rotation about its Z axis. This causes the world to roll slightly while travelling about.
As an example... if I have a square quad in front of the camera and move the camera in a circular motion, so as if looking around in a circle with your head, once the motion is complete the quad will have rolled slightly as if you've tilted your head.
My camera is currently a component which I can attach to an object/entity in my scene. Each entity has a "Frame" which is basically the model matrix for that entity. The Frame contains the following attributes:
glm::mat4 m_Matrix;
glm::vec3 m_Position;
glm::vec3 m_Up;
glm::vec3 m_Forward;
These are then used by the camera to create the appropriate viewMatrix like this:
const glm::mat4& CameraComponent::GetViewMatrix()
{
    //Get the transform of the object
    const Frame& transform = GetOwnerGO()->GetTransform();
    //Update the viewMatrix
    m_ViewMatrix = glm::lookAt(transform.GetPosition(),                          //position of camera
                               transform.GetPosition() + transform.GetForward(), //position to look at
                               transform.GetUp());                               //up vector
    //return reference to the view matrix
    return m_ViewMatrix;
}
And now... here are my rotate X and Y methods within the Frame object, which I'm guessing is the place of the problem:
void Frame::RotateX( float delta )
{
    glm::vec3 cross = glm::normalize(glm::cross(m_Up, m_Forward)); //calculate x axis
    glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, cross);
    m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f))); //Rotate forward vector by new rotation
    m_Up = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Up, 0.0f)));           //Rotate up vector by new rotation
}

void Frame::RotateY( float delta )
{
    glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
    //Rotate forward vector by new rotation
    m_Forward = glm::normalize(glm::vec3(Rotation * glm::vec4(m_Forward, 0.0f)));
}
So somewhere in there is a problem I've been trying to track down. I've been messing with it for a few days now, trying various things, but I either get the same result, or the Z-axis rotation is fixed while other bugs appear, such as incorrect X/Y rotation and camera movement.
I had a look at gimbal lock but from what I understood of it, this problem didn't seem quite like gimbal lock to me. But I may be wrong.
Store the current pitch/yaw angles and generate the camera matrix on-the-fly instead of trying to accumulate small changes on the intermediate vectors.
In your RotateY function, change it from this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, m_Up);
to this:
glm::mat4 Rotation = glm::rotate(glm::mat4(1.0f), delta, glm::vec3(0,1,0));
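The first suggestion (store pitch/yaw and regenerate the matrix every frame) could look roughly like the sketch below. BuildViewMatrix is a hypothetical helper, not part of the question's Frame class; angles are in radians:
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Rebuild forward/up from the stored yaw/pitch each frame instead of
// incrementally rotating the cached vectors, so no roll can accumulate.
glm::mat4 BuildViewMatrix(const glm::vec3& position, float yaw, float pitch)
{
    glm::vec3 forward(std::cos(pitch) * std::sin(yaw),
                      std::sin(pitch),
                      std::cos(pitch) * std::cos(yaw));
    forward = glm::normalize(forward);
    const glm::vec3 worldUp(0.0f, 1.0f, 0.0f);  // degenerate only when looking straight up/down
    const glm::vec3 right = glm::normalize(glm::cross(forward, worldUp));
    const glm::vec3 up    = glm::cross(right, forward);
    return glm::lookAt(position, position + forward, up);
}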

Camera rotation (OpenGL)

I am having trouble with a camera class I am trying to use in my program. When I change the camera_target of the gluLookAt call, my whole terrain is rotating instead of just the camera rotating like it should.
Here is some code from my render method:
camera->Place();
ofSetColor(255, 255, 255, 255);
//draw axis lines
//x-axis
glBegin(GL_LINES);
glColor3f(1.0f,0.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(100.0f, 0.0f,0.0f);
glEnd();
//y-axis
glBegin(GL_LINES);
glColor3f(0.0f,1.0f,0.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 100.0f,0.0f);
glEnd();
//z-axis
glBegin(GL_LINES);
glColor3f(0.0f,0.0f,1.0f);
glVertex3f(0.0f,0.0f,0.0f);
glVertex3f(0.0f, 0.0f,100.0f);
glEnd();
glColor3f(1,1,1);
terrain->Draw();
And the rotate and place methods from my camera class:
void Camera::RotateCamera(float h, float v){
    hRadians += h;
    vRadians += v;
    cam_target.y = cam_position.y+(float)(radius*sin(vRadians));
    cam_target.x = cam_position.x+(float)(radius*cos(vRadians)*cos(hRadians));
    cam_target.z = cam_position.z+(float)(radius*cos(vRadians)*sin(hRadians));
    cam_up.x = cam_position.x-cam_target.x;
    cam_up.y = ABS(cam_position.y+(float)(radius*sin(vRadians+PI/2)));
    cam_up.z = cam_position.z-cam_target.z;
}

void Camera::Place() {
    //position, camera target, up vector
    gluLookAt(cam_position.x, cam_position.y, cam_position.z,
              cam_target.x, cam_target.y, cam_target.z,
              cam_up.x, cam_up.y, cam_up.z);
}
The problem is that the whole terrain is moving around the camera, whereas the camera should just be rotating.
Thanks for any help!
EDIT - Found some great tutorials and, taking into account the answers on here, I made a better camera class. Thanks guys
From the POV of the terrain, yes, the camera is rotating. But, since your view is from the POV of the camera, when you rotate the camera, it appears that the terrain is rotating. This is the behavior that gluLookAt() is intended to produce. If there is something else that you expected, you will need to rotate only the geometry that you want rotated, and not try to rotate using gluLookAt().
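To illustrate that last point, a minimal sketch (reusing the draw calls from the question; terrainAngle is a hypothetical variable) of rotating just one piece of geometry while leaving the gluLookAt() camera untouched:
camera->Place();                            // camera stays where gluLookAt() put it
glPushMatrix();
glRotatef(terrainAngle, 0.0f, 1.0f, 0.0f);  // spin only the terrain about the world y axis
terrain->Draw();
glPopMatrix();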
Update 1: Based on the discussion below, try this:
void Camera::RotateCamera(float h, float v)
{
    hRadians += h;
    vRadians += v;
    cam_norm.x = cos(vRadians) * sin(hRadians);
    cam_norm.y = -sin(vRadians);
    cam_norm.z = cos(vRadians) * cos(hRadians);
    cam_up.x = sin(vRadians) * sin(hRadians);
    cam_up.y = cos(vRadians);
    cam_up.z = sin(vRadians) * cos(hRadians);
}

void Camera::Place()
{
    //position, camera target, up vector
    gluLookAt(cam_pos.x, cam_pos.y, cam_pos.z,
              cam_pos.x + cam_norm.x, cam_pos.y + cam_norm.y, cam_pos.z + cam_norm.z,
              cam_up.x, cam_up.y, cam_up.z);
}
Separating the camera normal (the direction the camera is looking) from the position allows you to independently change the position, pan and tilt... that is, you can change the position without having to recompute the normal and up vectors.
Disclaimer: This is untested, just what I could do on the back of a sheet of paper. It assumes a right handed coordinate system and that pan rotation is applied before tilt.
Update 2: Derived using linear algebra rather than geometry... again, untested, but I have more confidence in this.
Well, rotating the camera or orbiting the terrain around the camera looks essentially the same.
If you want to orbit the camera around a fixed terrain point you have to modify the camera position, not the target.
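A minimal sketch of that idea, reusing the member names from the question (OrbitCamera is a hypothetical method name; the up-vector handling is omitted):
void Camera::OrbitCamera(float h, float v)
{
    hRadians += h;
    vRadians += v;
    // Move the camera position around the fixed target on a sphere of the given radius
    cam_position.x = cam_target.x + (float)(radius * cos(vRadians) * cos(hRadians));
    cam_position.y = cam_target.y + (float)(radius * sin(vRadians));
    cam_position.z = cam_target.z + (float)(radius * cos(vRadians) * sin(hRadians));
}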
Should it not be?:
cam_up.x = cam_target.x - cam_position.x;
cam_up.y = ABS(cam_position.y+(float)(radius*sin(vRadians+PI/2))) ;
cam_up.z = cam_target.z - cam_position.z;
Perhaps you should normalize cam_up as well.
HTH

Rotation of camera used for perspective projection

I've just started playing with OpenGL to render a number of structures, each comprising a number of polygons.
Basically I want to perform the equivalent of setting a camera at (0,0,z) in the world (structure) coordinates and rotating it about the x, y and z axes of the world (in that order!) to render a view of each structure (as I understand it, it is common practice to use the inverse camera matrix). Thus, as I understand it, I need: translate (to the world origin, i.e. (0,0,-z)) * rotateZ * rotateY * rotateX * translate (to re-define the world origin, see below).
So I think I need something like:
//Called when the window is resized
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(9.148, (double)w / (double)h, 800.0, 1500.0);
}
float _Zangle = 10.0f;
float _cameraAngle = 90.0f;
//Draws the 3D scene
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    glTranslatef(0.0f, 0.0f, -z); //Move forward Z (mm) units
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" by camera_angle about y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // rotate "camera" by 90 degrees about x-axis
    glTranslatef(-11.0f, 189.0f, 51.0f); //re-define origin of world coordinates to be (11,-189,-51) - applied to all polygon vertices
    glPushMatrix(); //Save the transformations performed thus far
    glBegin(GL_POLYGON);
    glVertex3f(4.91892, -225.978, -50.0009);
    glVertex3f(5.73534, -225.978, -50.0009);
    glVertex3f(6.55174, -225.978, -50.0009);
    glVertex3f(7.36816, -225.978, -50.0009);
    .......// etc
    glEnd();
    glPopMatrix();
However, when I compile and run this, the _Zangle and _cameraAngle rotations seem to be reversed, i.e. _Zangle seems to rotate about the y-axis (vertical in the viewport) and _cameraAngle about the z-axis (into the plane of the viewport). What am I doing wrong?
Thanks for taking the time to read this
The short answer is: Use gluLookAt(). This utility function creates the proper viewing matrix.
The longer answer is that each OpenGL transformation call takes the current matrix and multiplies it by a matrix built to accomplish the transformation. By calling a series of OpenGL transformation functions you build one transformation matrix that will apply the combination of transformations. Effectively, the matrix will be M = M1 * M2 * M3 . . . Mathematically, the transformations are applied from right to left in the above equation.
Your code doesn't move the camera. It stays at the origin, and looks down the negative z-axis. Your transformations move everything in model space to (11,-189,-51), rotates everything 90 degrees about the x-axis, rotates everything 90 degrees about the y-axis, rotates everything 10 degrees about the z-axis, then translates everything -z along the z-axis.
EDIT: More information
I'm a little confused about what you want to accomplish, but I think you want to have elements at the origin, and have the camera look at those elements. The eye coordinates would be where you want the camera, and the center coordinates would be where you want the objects to be. I'd use a little trigonometry to calculate the position of the camera, and point it at the origin.
In this type of situation I usually keep track of the camera position using longitude, latitude, and elevation centered on the origin. Calculating x, y, z for the eye coordinates is simply x = elv * cos(lat) * sin(lon), y = elv * sin(lat), z = elv * cos(lat) * cos(lon).
My gluLookAt call would be gluLookAt(x, y, z, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
You could rotate the up on the camera by changing the last three coordinates for gluLookAt.
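Putting those pieces together, a rough sketch of that camera (placeCamera is a hypothetical helper; angles are in radians):
#include <cmath>
#include <GL/glu.h>

void placeCamera(double lat, double lon, double elv)
{
    double x = elv * cos(lat) * sin(lon);
    double y = elv * sin(lat);
    double z = elv * cos(lat) * cos(lon);
    gluLookAt(x, y, z,        // eye position on a sphere of radius elv
              0.0, 0.0, 0.0,  // look at the origin, where the objects sit
              0.0, 1.0, 0.0); // world up; change these to roll the camera
}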
The z axis is coming from the center of the monitor into you. So, rotating around the z-axis should make the camera spin in place (like a 2D rotation on just the xy plane). I can't tell, but is that what's happening here?
It's possible that you are encountering Gimbal Lock. Try removing one of the rotations and see if things work the way they should.
While it's true that you can't actually move the camera in OpenGL, you can simulate camera motion by moving everything else. This is why you hear about the inverse camera matrix. Instead of moving the camera by (0, 0, 10), we can move everything in the world by (0, 0, -10). If you expand those out into matrices, you will find that they are inverses of each other.
I also noticed that, given the code presented, you don't need the glPushMatrix()/glPopMatrix() calls. Perhaps there is code that you haven't shown that requires them.
Finally, can you provide an idea of what it is you are trying to render? Debugging rotations can be hard without some context.
Short answer: Good tip.
Longer answer: Yes, the order of matrix multiplication is clear... that's what I meant by the inverse camera matrix: moving all the world coordinates of the structures into camera coordinates (hence the use of "camera" in my comments ;-)) instead of actually translating and rotating the camera into world coordinates.
So if I read between the lines correctly you suggest something like:
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
    glLoadIdentity(); //Reset the drawing perspective
    gluLookAt(0.0, 0.0, z, 11.0, -189.0, -51.0, 0.0, 1.0, 0.0); //eye (0,0,z), look at re-defined world origin (11,-189,-51), up (0.0,1.0,0.0)
    glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f); //Rotate "camera" (actually structures) about the z-axis
    glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); //Rotate the "camera" (actually structures!) by camera_angle about y-axis
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f); // rotate "camera" (actually structures) by 90 degrees about x-axis
    glPushMatrix();
Or am I still missing something?
I think you are mixing up the axes of your world with the axes of the camera.
glRotatef() only uses the axes of the camera; they are not the same as the world axes once the camera has been rotated.
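For example, with the variables from the question (a minimal illustration, not the question's full code):
glLoadIdentity();
glRotatef(-_Zangle, 0.0f, 0.0f, 1.0f);      // rotates about the world z axis
glRotatef(-_cameraAngle, 0.0f, 1.0f, 0.0f); // rotates about the y axis as already tilted
                                            // by the previous call, i.e. a local axis,
                                            // not the world y axis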