I am trying to code a game in C++ using SFML, in which several space ships attack a planet from different sides. I want the space ships that attack the planet to face the planet while they attack (obviously). The problem I am having is figuring out a formula that would work for every x and y position. If the sprite for the ship starts off by facing upwards, what would be the best way to approach this?
For example, if the ship spawns on the right side of the planet, the ship sprite should rotate to the left, towards the center of the planet.
Below is the math for rotation, which is what you seem to be asking for; however, you're probably better off simply using SFML's API for rotating entities: http://www.sfml-dev.org/tutorials/2.0/graphics-transform.php.
Rotating a point around the origin can be represented as a matrix multiplication with the following matrix:
[[ cos(a), sin(a), 0],
[-sin(a), cos(a), 0],
[ 0, 0, 1]]
where a is the angle to rotate by.
(This can be derived from the trig identities cos(a)sin(b) + cos(b)sin(a) = sin(a + b) and cos(a)cos(b) - sin(a)sin(b) = cos(a + b))
Translating a point so that it is positioned where you want to rotate relative to the origin can also be represented as a matrix multiplication, and of course translating it back can be as well.
To translate a point by a in the horizontal direction and b in the vertical direction you multiply your point [x, y, 1] by the matrix
[[1, 0, 0],
[0, 1, 0],
[a, b, 1]]
This produces the matrix [x+a, y+b, 1], which is the translated point (x+a, y+b).
To translate a point in preparation for rotation around a point (Cx, Cy), you first translate it by -Cx in the horizontal direction and -Cy in the vertical direction. The matrix for that is [[1, 0, 0], [0, 1, 0], [-Cx, -Cy, 1]]. After the rotation you translate by the same amounts in the opposite directions, which is the same as multiplying by the inverse matrix: [[1, 0, 0], [0, 1, 0], [Cx, Cy, 1]].
Say your initial point is p, a matrix A translates it to be relative to the origin for rotation, a matrix R rotates about the origin. Then your rotated point is
((p × A) × R) × A⁻¹
Since matrix multiplication is associative you can rearrange this to:
p × ((A × R) × A⁻¹)
Which means you can compute the matrix A × R × A⁻¹ and then multiply any point by that matrix to rotate that point around a common center. So for each object you want to rotate, compute the matrix that rotates points around the object's center point, and then rotate every point of that object using the matrix.
When you combine the translation matrices with the rotation matrix you get a single matrix:
[[ cos(a), sin(a), 0],
[ -sin(a), cos(a), 0],
[-cos(a) * Cx + Cx + Cy * sin(a), -cos(a) * Cy + Cy - Cx * sin(a), 1]]
Multiplying a point (x, y) by this matrix produces a point with the x coordinate:
x × cos(a) - y × sin(a) + (-cos(a) × Cx + Cx + Cy × sin(a))
and the y coordinate:
x × sin(a) + y × cos(a) + (-cos(a) × Cy + Cy - Cx × sin(a))
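To see the composition numerically, here is a small sketch (Python with numpy is my own choice here; the row-vector convention matches the matrices above):

import numpy as np

def rotation_about(cx, cy, a):
    # Row-vector convention: points are multiplied as [x, y, 1] @ M
    A = np.array([[1, 0, 0], [0, 1, 0], [-cx, -cy, 1]], dtype=float)    # translate the center to the origin
    R = np.array([[np.cos(a), np.sin(a), 0],
                  [-np.sin(a), np.cos(a), 0],
                  [0, 0, 1]])                                           # rotate about the origin
    A_inv = np.array([[1, 0, 0], [0, 1, 0], [cx, cy, 1]], dtype=float)  # translate back
    return A @ R @ A_inv

M = rotation_about(1, 1, np.pi / 2)
print(np.array([2, 1, 1]) @ M)   # approximately [1, 2, 1], i.e. the point (1, 2)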
To try this out, say we want to rotate points around (1, 1). If we rotate the point (2, 1) by pi/2 (90° counter clockwise) we should get the point (1, 2), and rotating the point (1, 2) should give us (0, 1).
Plugging in the coordinates (2, 1) and the angle pi/2 into the equation for the rotated x coordinates produces:
2 * cos(pi/2) - 1 * sin(pi/2) + -cos(pi/2) * 1 + 1 + 1 * sin(pi/2) = 1
and plugging them into the equation for the y coordinate produces:
2 * sin(pi/2) + 1 * cos(pi/2) + (-cos(pi/2) * 1 + 1 - 1 * sin(pi/2)) = 2
Which means the point (2, 1) rotated 90° around (1, 1) is the point (1, 2), just as expected.
Then plugging in the point (1, 2):
1 * cos(pi/2) - 2 * sin(pi/2) + -cos(pi/2) * 1 + 1 + 1 * sin(pi/2) = 0
1 * sin(pi/2) + 2 * cos(pi/2) + (-cos(pi/2) * 1 + 1 - 1 * sin(pi/2)) = 1
produces the point (0, 1) which is, again, just as expected.
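As a quick sanity check in code, here is a minimal Python sketch of the closed-form expressions above (the function name is mine):

from math import cos, sin, pi

def rotate_about(x, y, cx, cy, a):
    # The combined rotate-around-(Cx, Cy) formulas derived above
    nx = x * cos(a) - y * sin(a) + (-cos(a) * cx + cx + cy * sin(a))
    ny = x * sin(a) + y * cos(a) + (-cos(a) * cy + cy - cx * sin(a))
    return nx, ny

print(rotate_about(2, 1, 1, 1, pi / 2))   # approximately (1.0, 2.0)
print(rotate_about(1, 2, 1, 1, pi / 2))   # approximately (0.0, 1.0)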
Denote the position of the planet as (xP,yP) and the position of the ship as (xS,yS) with its point facing upwards, direction (0,1)' (the prime denoting transposition to a column vector). To have the ship pointing at the planet, we need it pointing in the opposite direction of the difference vector
(x,y) = (xS-xP, yS-yP).
More precisely, the normalized direction of the point should be (-x/r,-y/r)', using r=sqrt(x^2+y^2). The rotation of the direction vector (0,1)' to (-x/r,-y/r)' is performed using the rotation matrix
( -y/r  -x/r )   ( 0 )   ( -x/r )
(  x/r  -y/r ) * ( 1 ) = ( -y/r )
for the matrix-vector-product convention. If you use the vector-matrix product instead, use its transpose.
Use that if specifying the matrix directly. When using rotation angles, the angle of that rotation is atan2(x,-y) in radians.
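For the original question, a minimal sketch of that angle computation (written in Python for brevity, and the function name is mine; note that SFML's setRotation works in degrees and its window coordinates have y pointing down, so you may need to flip a sign or add an offset for your own setup):

from math import atan2, degrees

def facing_angle(ship_x, ship_y, planet_x, planet_y):
    # Difference vector from the planet to the ship, as above
    x = ship_x - planet_x
    y = ship_y - planet_y
    # Angle that rotates the ship's initial "up" direction (0,1)' onto (-x/r, -y/r)'
    return degrees(atan2(x, -y))

# A ship spawned to the right of the planet should turn to face left, towards it
print(facing_angle(200, 100, 100, 100))   # 90.0 under this convention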
I have two questions. (I marked 1, 2 below)
In OpenGL, the clipping is done by Sutherland-Hodgman.
However, I wonder how the Sutherland-Hodgman algorithm works in a homogeneous system (4D).
I set up an example situation.
In the VCS, there is a line with end points R = (0, 3, -2, 1) and S = (0, 0, 1, 1).
And the frustum is right = 1, left = -1, near = 1, far = 3, top = 4, bottom = -4.
Therefore, the projection matrix P is
1 0 0 0
0 1/4 0 0
0 0 -2 -3
0 0 -1 0
If we transform the end points with P, then they become
R' = (0, 3/4, 1, 2), S' = (0, 0, -5, -1)
I know that perspective division should not be done now, because if we do perspective division, the clipping result is not correct.
Here is what I am curious about: what makes the clipping correct when we have not done the perspective division? What mathematical properties are at work here?
How do I calculate the clipping result in the above situation?
(The fact that two intersections occur in the w-y coordinate system confuses me. I thought the resulting line should be one segment, not divided into two parts.)
I'm not quite sure whether you understood the Sutherland-Hodgman algorithm correctly (or at least I didn't get your example). Thus I will prove here that it doesn't make any difference whether clipping happens before or after the perspective divide. The proof is only shown for one plane (clipping has to be done against all 6 planes), since applying multiple such clipping operations one after another makes no difference here.
Let's assume we have two points (as you described), R' and S', in clip space. And we have a clipping plane P given in Hessian normal form [n, p] (if we take the left plane this is [1, 0, 0, 1]).
If we were calculating in pure 3D space (R^3), then checking whether a line crosses this plane would be done by calculating the signed distance of both points to the plane and checking whether the signs differ. The signed distance for a point X = [x/w, y/w, z/w] is given by
D = dot(n, X) + p
Let's write down the actual equation we have (including the perspective divide):
d = n_x * x/w + n_y * y/w + n_z * z/w + p
In order to find the exact intersection point, we would, again in R^3 space, calculate for both points (A = R'/R'w, B = S'/S'w) the distance to the plane (da, db) and perform a linear interpolation (I will only write the equations for the x-coordinate here since y and z work the same way):
x = A_x * (1 - da/(da - db)) + B_x * (da/(da - db))
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
And w = 1 (since we interpolate between two points both having w = 1)
Now we already know from the previous discussion that clipping has to happen before the perspective divide, thus we have to adapt this equation. This means that for each point the clipping cube has a different scaling w. Let's see what happens when we try to perform the same operations in P^3 (before the perspective divide):
First, we "revert" the perspective divide to get to X=[x,y,z,w] for which the distance to the plane is given by
d = n_x * x/w + n_y * y/w + n_z * z/w + p
d = (n_x * x + n_y * y + n_z * z) / w + p
d * w = n_x * x + n_y * y + n_z * z + p * w
d * w = dot([n, p], [x,y,z,w])
d * w = dot(P, X)
Since we are only interested in the sign of the whole calculation, which we haven't changed by our operations, we can compare the d·w values and get the same inside/outside result as in R^3.
For the two points R' and S', the calculated distances in P^3 are dr = da * R'w and ds = db * S'w. When we now use the same interpolation equation as above but for R' and S' we get for x:
x' = R'x * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'x * (da * R'w)/(da * R'w - db * S'w)
At first glance this looks rather different from the result we got in R^3, but since we are still in P^3 (hence x'), we still have to do the perspective divide on the result (this is allowed here, since the interpolated point will always be at the border of the view frustum and thus dividing by w will not introduce any problems). The interpolated w component is given as:
w' = R'w * (1 - (da * R'w)/(da * R'w - db * S'w)) + S'w * (da * R'w)/(da * R'w - db * S'w)
And when calculating x/w we get
x = x' / w';
x = R'x/R'w * (1 - da/(da - db)) + S'x/S'w * (da/(da-db))
which is exactly the same result as when calculating everything in R^3.
Conclusion: The interpolation gives the same result, no matter if we perform the perspective divide first and interpolation afterwards or interpolating first and dividing then. But with the second variant we avoid the problem with points flipping from behind the viewer to the front since we are only dividing points that are guaranteed to be inside (or on the border) of the viewing frustum.
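To make this concrete with the numbers from the question, here is a small sketch (plain Python, helper names are mine) that clips R' and S' against the near plane [n, p] = [0, 0, 1, 1] in clip space and only divides by w at the end:

def dot4(a, b):
    return sum(x * y for x, y in zip(a, b))

R = (0.0, 0.75, 1.0, 2.0)     # R' from the question
S = (0.0, 0.0, -5.0, -1.0)    # S' from the question
plane = (0.0, 0.0, 1.0, 1.0)  # near plane z = -1 in Hessian normal form [n, p]

dr = dot4(plane, R)   #  3.0 -> inside
ds = dot4(plane, S)   # -6.0 -> outside (after dividing by its negative w the sign would wrongly flip)

t = dr / (dr - ds)                                        # 1/3
clipped = tuple(r + t * (s - r) for r, s in zip(R, S))    # (0.0, 0.5, -1.0, 1.0)
x, y, z = (c / clipped[3] for c in clipped[:3])           # perspective divide afterwards
print(clipped, (x, y, z))                                 # the clipped point lies exactly on z = -1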
You speak of polygon clipping in a homogeneous system (4D), but from your question I assume that you actually mean homogeneous coordinates, which makes a lot more sense. (There are many possible homogeneous systems.)
OK, so you want to use "4D" coordinates, which are really "3D coordinates and a w term". The w term is the projective term (introduced by projection transformations) that partially relates the screen-space coordinate to the original world-space position. Assuming that you are NOT interested in projective-space clipping, this term is not relevant.
I'm assuming this because the clipping box you describe is axis-aligned on planes in 3D. Even if it was rotated or scaled in 3D space, each of the planes would still be a 3D plane, the 4th coordinate always being '1'.
So how to clip:
clip line segment L against each of the planes of the clipping box, i.e. 6 clipping planes in total (you describe the normals of each clipping plane aptly), and see if any intersection point v is shared by the line and the tested plane P so that
v lies on the line segment (i.e. a t between 0 and 1)
v lies within the bounds of the plane P (i.e. the coordinate should not lie beyond any of the adjacent planes. Since you are using axis-aligned clipping planes, this is easy to check.)
Any of these intersections between a (3D + w) line and one of the 3D planes occurs in 3D, and the intersection points are 3D coordinates. You can extend each of these coordinates with a 4th w coordinate into a "4D" coordinate so that you can further transform them using 4x4 matrices for view and projection processing.
What are the best algorithms (and explanations) for representing and rotating the pieces of a tetris game? I always find the piece rotation and representation schemes confusing.
Most tetris games seem to use a naive "remake the array of blocks" at each rotation:
http://www.codeplex.com/Project/ProjectDirectory.aspx?ProjectSearchText=tetris
However, some use pre-built encoded numbers and bit shifting to represent each piece:
http://www.codeplex.com/wintris
Is there a method to do this using mathematics (not sure that would work on a cell based board)?
When I was trying to figure out how rotations would work for my tetris game, this was the first question that I found on stack overflow. Even though this question is old, I think my input will help others trying to figure this out algorithmically. First, I disagree that hard coding each piece and rotation will be easier. Gamecat's answer is correct, but I wanted to elaborate on it. Here are the steps I used to solve the rotation problem in Java.
For each shape, determine where its origin will be. I used the points on the diagram from this page to assign my origin points. Keep in mind that, depending on your implementation, you may have to modify the origin every time the piece is moved by the user.
Rotation assumes the origin is located at point (0,0), so you will have to translate each block before it can be rotated. For example, suppose your origin is currently at point (4, 5). This means that before the shape can be rotated, each block must be translated -4 in the x-coordinate and -5 in the y-coordinate to be relative to (0,0).
In Java, a typical coordinate plane starts with point (0,0) in the upper left most corner and then increases to the right and down. To compensate for this in my implementation, I multiplied each point by -1 before rotation.
Here are the formulae I used to figure out the new x and y coordinate after a counter-clockwise rotation. For more information on this, I would check out the Wikipedia page on Rotation Matrix. x' and y' are the new coordinates:
x' = x * cos(PI/2) - y * sin(PI/2) and y' = x * sin(PI/2) + y * cos(PI/2)
For the last step, I just went through steps 2 and 3 in reverse order. So I multiplied my results by -1 again and then translated the blocks back to their original coordinates.
Here is the code that worked for me (in Java) to get an idea of how to do it in your language:
public synchronized void rotateLeft(){
    Point[] rotatedCoordinates = new Point[MAX_COORDINATES];
    for(int i = 0; i < MAX_COORDINATES; i++){
        // Translates current coordinate to be relative to (0,0)
        Point translationCoordinate = new Point(coordinates[i].x - origin.x, coordinates[i].y - origin.y);
        // Java coordinates start at 0 and increase as a point moves down, so
        // multiply by -1 to reverse
        translationCoordinate.y *= -1;
        // Clone coordinates, so I can use translation coordinates
        // in upcoming calculation
        rotatedCoordinates[i] = (Point)translationCoordinate.clone();
        // May need to round results after rotation
        rotatedCoordinates[i].x = (int)Math.round(translationCoordinate.x * Math.cos(Math.PI/2) - translationCoordinate.y * Math.sin(Math.PI/2));
        rotatedCoordinates[i].y = (int)Math.round(translationCoordinate.x * Math.sin(Math.PI/2) + translationCoordinate.y * Math.cos(Math.PI/2));
        // Multiply y-coordinate by -1 again
        rotatedCoordinates[i].y *= -1;
        // Translate to get new coordinates relative to
        // original origin
        rotatedCoordinates[i].x += origin.x;
        rotatedCoordinates[i].y += origin.y;
        // Erase the old coordinates by making them black
        matrix.fillCell(coordinates[i].x, coordinates[i].y, Color.black);
    }
    // Set new coordinates to be drawn on screen
    setCoordinates(rotatedCoordinates.clone());
}
This method is all that is needed to rotate your shape to the left, which turns out to be much smaller (depending on your language) than defining each rotation for every shape.
There is a limited amount of shapes, so I would use a fixed table and no calculation. That saves time.
But there are rotation algorithms.
Choose a center point and rotate by pi/2.
If a block of a piece starts at (1,2) it moves clockwise to (2,-1), then (-1,-2), then (-2,1).
Apply this for each block and the piece is rotated.
Each new x is the previous y and each new y is the negative of the previous x, which gives the following matrix:
[ 0 1 ]
[ -1 0 ]
For counterclockwise rotation, use:
[ 0 -1 ]
[ 1 0 ]
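A tiny sketch of applying those two matrices to block coordinates (Python; the helper names are mine):

def rotate_cw(blocks):
    # [ 0 1; -1 0 ]: new x = old y, new y = -old x
    return [(y, -x) for x, y in blocks]

def rotate_ccw(blocks):
    # [ 0 -1; 1 0 ]: new x = -old y, new y = old x
    return [(-y, x) for x, y in blocks]

print(rotate_cw([(1, 2)]))    # [(2, -1)]
print(rotate_ccw([(2, -1)]))  # [(1, 2)]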
This is how I did it recently in a jQuery/CSS based tetris game.
Work out the centre of the block (to be used as a pivot point), i.e. the centre of the block shape.
Call that (px, py).
Each brick that makes up the block shape will rotate around that point.
For each brick, you can apply the following calculation...
Where each brick's width and height is q, the brick's current location (of the upper left corner) is (x1, y1) and the new brick location is (x2, y2):
x2 = (y1 + px - py)
y2 = (px + py - x1 - q)
To rotate the opposite direction:
x2 = (px + py - y1 - q)
y2 = (x1 + py - px)
This calculation is based on a 2D affine matrix transformation.
If you are interested in how I got to this let me know.
Personally I've always just represented the rotations by hand - with very few shapes, it's easy to code that way. Basically I had (as pseudo-code)
class Shape
{
    Color color;
    ShapeRotation[] rotations;
}

class ShapeRotation
{
    Point[4] points;
}

class Point
{
    int x, y;
}
At least conceptually - a multi-dimensional array of points directly in shape would do the trick too :)
You can rotate a matrix purely by applying mathematical operations to it. If you have a matrix, say:
Mat A = [1,1,1]
        [0,0,1]
        [0,0,0]
To rotate it, take its transpose and then multiply it by this matrix ([I]dentity, [H]orizontally [M]irrored):
IHM(A) = [0,0,1]
         [0,1,0]
         [1,0,0]
Then you'll have:
Mat Rotation = Trn(A) * IHM(A) = [1,0,0]   [0,0,1]   [0,0,1]
                                 [1,0,0] * [0,1,0] = [0,0,1]
                                 [1,1,0]   [1,0,0]   [0,1,1]
Note: Center of rotation will be the center of the matrix, in this case at (2,2).
Representation
Represent each piece in the minimum matrix where 1's represent spaces occupied by the tetromino and 0's represent empty space. Example:
originalMatrix =
[0, 0, 1]
[1, 1, 1]
Rotation Formula
clockwise90DegreesRotatedMatrix = reverseTheOrderOfColumns(Transpose(originalMatrix))
anticlockwise90DegreesRotatedMatrix = reverseTheOrderOfRows(Transpose(originalMatrix))
Illustration
originalMatrix =
x y z
a[0, 0, 1]
b[1, 1, 1]
transposed = transpose(originalMatrix)
a b
x[0, 1]
y[0, 1]
z[1, 1]
counterClockwise90DegreesRotated = reverseTheOrderOfRows(transposed)
a b
z[1, 1]
y[0, 1]
x[0, 1]
clockwise90DegreesRotated = reverseTheOrderOfColumns(transposed)
b a
x[1, 0]
y[1, 0]
z[1, 1]
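The same recipe takes only a few lines of Python (a sketch; the function names are mine):

def transpose(m):
    return [list(row) for row in zip(*m)]

def rotate_cw(m):
    # reverse the order of the columns of the transpose
    return [row[::-1] for row in transpose(m)]

def rotate_ccw(m):
    # reverse the order of the rows of the transpose
    return transpose(m)[::-1]

original = [[0, 0, 1],
            [1, 1, 1]]
print(rotate_cw(original))    # [[1, 0], [1, 0], [1, 1]]
print(rotate_ccw(original))   # [[1, 1], [0, 1], [0, 1]]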
Since there are only 4 possible orientations for each shape, why not use an array of states for the shape and rotating CW or CCW simply increments or decrements the index of the shape state (with wraparound for the index)? I would think that might be quicker than performing rotation calculations and whatnot.
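A minimal sketch of that idea (Python; the class and member names are mine):

class Piece:
    def __init__(self, states):
        self.states = states   # the 4 precomputed orientations of the shape
        self.index = 0         # index of the current orientation

    def rotate_cw(self):
        self.index = (self.index + 1) % len(self.states)

    def rotate_ccw(self):
        self.index = (self.index - 1) % len(self.states)

    def cells(self):
        return self.states[self.index]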
I derived a rotation algorithm from matrix rotations here. To sum it up: If you have a list of coordinates for all cells that make up the block, e.g. [(0, 1), (1, 1), (2, 1), (3, 1)] or [(1, 0), (0, 1), (1, 1), (2, 1)]:
 0123        012
0....       0.#.
1####   or  1###
2....       2...
3....
you can calculate the new coordinates using
x_new = y_old
y_new = 1 - (x_old - (me - 2))
for clockwise rotation and
x_new = 1 - (y_old - (me - 2))
y_new = x_old
for counter-clockwise rotation. me is the maximum extent of the block, i.e. 4 for I-blocks, 2 for O-blocks and 3 for all other blocks.
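A small Python sketch of those two formulas (the function names are mine):

def rotate_cw(block, me):
    # me = maximal extent: 4 for I-blocks, 2 for O-blocks, 3 otherwise
    return [(y, 1 - (x - (me - 2))) for x, y in block]

def rotate_ccw(block, me):
    return [(1 - (y - (me - 2)), x) for x, y in block]

i_block = [(0, 1), (1, 1), (2, 1), (3, 1)]
print(rotate_cw(i_block, 4))   # [(1, 3), (1, 2), (1, 1), (1, 0)]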
If you're doing this in Python with a cell-based representation instead of coordinate pairs, it's very simple to rotate a nested list.
rotate = lambda tetrad: zip(*tetrad[::-1])
# S Tetrad
tetrad = rotate([[0,0,0,0], [0,0,0,0], [0,1,1,0], [1,1,0,0]])
If we assume that the central square of the tetromino has coordinates (x0, y0) which remains unchanged then the rotation of the other 3 squares in Java will look like this:
private void rotateClockwise()
{
    if(rotatable > 0) //We don't rotate tetromino O. It doesn't have central square.
    {
        int i = y1 - y0;
        y1 = (y0 + x1) - x0;
        x1 = x0 - i;
        i = y2 - y0;
        y2 = (y0 + x2) - x0;
        x2 = x0 - i;
        i = y3 - y0;
        y3 = (y0 + x3) - x0;
        x3 = x0 - i;
    }
}

private void rotateCounterClockwise()
{
    if(rotatable > 0)
    {
        int i = y1 - y0;
        y1 = (y0 - x1) + x0;
        x1 = x0 + i;
        i = y2 - y0;
        y2 = (y0 - x2) + x0;
        x2 = x0 + i;
        i = y3 - y0;
        y3 = (y0 - x3) + x0;
        x3 = x0 + i;
    }
}
For 3x3-sized tetris pieces:
flip the x and y of your piece,
then swap the outer columns.
That's what I figured out some time ago.
I have used a shape position and a set of four coordinates for the four points in all the shapes. Since it's in 2D space, you can easily apply a 2D rotation matrix to the points.
The points are divs so their css class is turned from off to on. (this is after clearing the css class of where they were last turn.)
If the array size is 3×3, then the simplest way to rotate it, for example in the anti-clockwise direction, is:
bool oldShapeMap[3][3] = {{1,1,0},
                          {0,1,0},
                          {0,1,1}};
bool newShapeMap[3][3] = {0};
int gridSize = 3;

for(int i = 0; i < gridSize; i++)
    for(int j = 0; j < gridSize; j++)
        newShapeMap[i][j] = oldShapeMap[j][(gridSize-1) - i];

/* newShapeMap now contains:
   {{0,0,1},
    {1,1,1},
    {1,0,0}};
*/
Python:
pieces = [
    [(0,0),(0,1),(0,2),(0,3)],
    [(0,0),(0,1),(1,0),(1,1)],
    [(1,0),(0,1),(1,1),(1,2)],
    [(0,0),(0,1),(1,0),(2,0)],
    [(0,0),(0,1),(1,1),(2,1)],
    [(0,1),(1,0),(1,1),(2,0)]
]

def get_piece_dimensions(piece):
    max_r = max_c = 0
    for point in piece:
        max_r = max(max_r, point[0])
        max_c = max(max_c, point[1])
    return max_r, max_c

def rotate_piece(piece):
    max_r, max_c = get_piece_dimensions(piece)
    new_piece = []
    for r in range(max_r+1):
        for c in range(max_c+1):
            if (r,c) in piece:
                new_piece.append((c, max_r-r))
    return new_piece
In Ruby, at least, you can actually use matrices. Represent your piece shapes as nested arrays of arrays like [[0,1],[0,2],[0,3]]
require 'matrix'
shape = shape.map{|arr|(Matrix[arr] * Matrix[[0,-1],[1,0]]).to_a.flatten}
However, I agree that hard-coding the shapes is feasible since there are 7 shapes and 4 states for each = 28 lines and it will never be any more than that.
For more on this see my blog post at
https://content.pivotal.io/blog/the-simplest-thing-that-could-possibly-work-in-tetris and a completely working implementation (with minor bugs) at https://github.com/andrewfader/Tetronimo
In Java:
private static char[][] rotateMatrix(char[][] m) {
    final int h = m.length;
    final int w = m[0].length;
    // The rotated matrix has swapped dimensions (w rows, h columns)
    final char[][] t = new char[w][h];
    for(int y = 0; y < h; y++) {
        for(int x = 0; x < w; x++) {
            t[w - x - 1][y] = m[y][x];
        }
    }
    return t;
}
A simple Tetris implementation as a single-page application in Java:
https://github.com/vadimv/rsp-tetris
I must be the worst person on the planet when it comes to math because I can't figure out how to change this circle's radius:
from math import *

posx, posy = 0, 0
sides = 32
glBegin(GL_POLYGON)
for i in range(100):
    cosine = cos(i*2*pi/sides) + posx
    sine = sin(i*2*pi/sides) + posy
    glVertex2f(cosine, sine)
I'm not entirely sure how or why this becomes a circle because the *2 confuses me a bit.
Note that this is done in Pyglet under Python2.6 calling OpenGL libraries.
Followed Example 4-1: http://fly.cc.fer.hr/~unreal/theredbook/chapter04.html
Clarification: This works, I'm interested in why, and in how to modify the radius.
This should do the trick :)
from math import *

posx, posy = 0, 0
sides = 32
radius = 1
glBegin(GL_POLYGON)
for i in range(100):
    cosine = radius * cos(i*2*pi/sides) + posx
    sine = radius * sin(i*2*pi/sides) + posy
    glVertex2f(cosine, sine)
But I would pick other names for the variables; cosine and sine are not exactly what these variables hold.
And as far as I can see, you don't need a loop of 100 iterations, you just need one iteration per side, i.e. range(sides).
Explanation:
When you calculate
x = cos (angle)
y = sin(angle)
you get a point on a circle with radius = 1 and centre at the point (0, 0) (because sin^2(angle) + cos^2(angle) = 1).
If you want to change a radius to R, you simply multiply cos and sin by R.
x = R * cos (angle)
y = R * sin(angle)
If you want to move the circle to another location (for example, you want the circle to have its centre at (X_centre, Y_centre)), you add X_centre and Y_centre to x and y accordingly:
x = R * cos (angle) + X_centre
y = R * sin(angle) + Y_centre
When you need to loop through N points (in your case N = sides) on your circle, you should change the angle on each iteration. All those angle steps should be equal and their sum should be 2 * pi, so each step should be equal to 2 * pi / N. To get the i-th angle you multiply this value by i: i * 2 * pi / N.
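Putting those pieces together, a short Python sketch of the vertex loop (the function name is mine):

from math import cos, sin, pi

def circle_points(radius, x_centre, y_centre, sides):
    # One vertex per side; the angle step between vertices is 2 * pi / sides
    return [(x_centre + radius * cos(i * 2 * pi / sides),
             y_centre + radius * sin(i * 2 * pi / sides))
            for i in range(sides)]

for x, y in circle_points(radius=2.0, x_centre=0.0, y_centre=0.0, sides=32):
    print(x, y)   # or pass each pair to glVertex2f(x, y)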
Math: the full circle is 2 * pi radians (the circumference is 2 * pi * r), and that is where the * 2 in i * 2 * pi / sides comes from. This should help you.
I'm essentially trying to mimic the way the camera rotates in Maya. The arcball in Maya is always aligned with the y-axis. So no matter where the up-vector is pointing, it's still rotated or registered with its up-vector along the y-axis.
I've been able to implement this arcball in OpenGL using C++ and Qt. But I can't figure out how to keep its up-vector aligned. I've been able to keep it aligned at times with the code below:
void ArcCamera::setPos (Vector3 np)
{
    Vector3 up(0, 1, 0);
    Position = np;
    ViewDir = (ViewPoint - Position); ViewDir.normalize();
    RightVector = ViewDir ^ up; RightVector.normalize();
    UpVector = RightVector ^ ViewDir; UpVector.normalize();
}
This works up until the position is at 90-degrees, then the right vector changes and everything is inverted.
So instead I've been maintaining the total rotation (in quaternions) and rotating the original positions (up, right, pos) by it. This works best to keep everything coherent, but now I simply can't align the up-vector to the y-axis. Below is the function for the rotation.
void CCamera::setRot (QQuaternion q)
{
    tot = tot * q;
    Position = tot.rotatedVector(PositionOriginal);
    UpVector = tot.rotatedVector(UpVectorOriginal);
    UpVector.normalize();
    RightVector = tot.rotatedVector(RightVectorOriginal);
    RightVector.normalize();
}
The QQuaternion q is generated from the axis-angle pair derived from the mouse drag. I'm confident this is done correctly. The rotation itself is fine, it just doesn't keep the orientation aligned.
I've noticed that in my chosen implementation, dragging in the corners produces a rotation around my view direction, and I can always realign the up-vector to straighten it out to the world's y-axis direction. So if I could figure out how much to roll, I could probably do two rotations each time to make sure it's all straight. However, I'm not sure how to go about this.
The reason this isn't working is because Maya's camera manipulation in the viewport does not use an arcball interface. What you want to do is Maya's tumble command. The best resource I've found for explaining this is this document from Professor Orr's Computer Graphics class.
Moving the mouse left and right corresponds to the azimuth angle, and specifies a rotation around the world space Y axis. Moving the mouse up and down corresponds to the elevation angle, and specifies a rotation around the view space X axis. The goal is to generate the new world-to-view matrix, then extract the new camera orientation and eye position from that matrix, based on however you've parameterized your camera.
Start with the current world-to-view matrix. Next, we need to define the pivot point in world space. Any pivot point will work to begin with, and it can be simplest to use the world origin.
Recall that pure rotation matrices generate rotations centered around the origin. This means that to rotate around an arbitrary pivot point, you first translate to the origin, perform the rotation, and translate back. Remember also that transformation composition happens from right to left, so the negative translation to get to the origin goes on the far right:
translate(pivotPosition) * rotate(angleX, angleY, angleZ) * translate(-pivotPosition)
We can use this to calculate the azimuth rotation component, which is a rotation around the world Y axis:
azimuthRotation = translate(pivotPosition) * rotateY(angleY) * translate(-pivotPosition)
We have to do a little additional work for the elevation rotation component, because it happens in view space, around the view space X axis:
elevationRotation = translate(worldToViewMatrix * pivotPosition) * rotateX(angleX) * translate(worldToViewMatrix * -pivotPosition)
We can then get the new view matrix with:
newWorldToViewMatrix = elevationRotation * worldToViewMatrix * azimuthRotation
Now that we have the new worldToView matrix, we're left with having to extract the new world space position and orientation from the view matrix. To do this, we want the viewToWorld matrix, which is the inverse of the worldToView matrix.
newOrientation = transpose(mat3(newWorldToViewMatrix))
newPosition = -((newOrientation * newWorldToViewMatrix).column(3))
At this point, we have the elements separated. If your camera is parameterized so that you're only storing a quaternion for your orientation, you just need to do the rotation matrix -> quaternion conversion. Of course, Maya is going to convert to Euler angles for display in the channel box, which will be dependent on the camera's rotation order (note that the math for tumbling doesn't change when the rotation order changes, just the way that the rotation matrix -> Euler angles conversion is done).
Here's a sample implementation in Python:
#!/usr/bin/env python

import numpy as np
from math import *

def translate(amount):
    'Make a translation matrix, to move by `amount`'
    t = np.matrix(np.eye(4))
    t[3] = amount.T
    t[3, 3] = 1
    return t.T

def rotateX(amount):
    'Make a rotation matrix, that rotates around the X axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [1, 0, 0, 0],
        [0, c,-s, 0],
        [0, s, c, 0],
        [0, 0, 0, 1],
    ])

def rotateY(amount):
    'Make a rotation matrix, that rotates around the Y axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c, 0, s, 0],
        [0, 1, 0, 0],
        [-s, 0, c, 0],
        [0, 0, 0, 1],
    ])

def rotateZ(amount):
    'Make a rotation matrix, that rotates around the Z axis by `amount` rads'
    c = cos(amount)
    s = sin(amount)
    return np.matrix([
        [c,-s, 0, 0],
        [s, c, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 1],
    ])

def rotate(x, y, z, pivot):
    'Make a XYZ rotation matrix, with `pivot` as the center of the rotation'
    m = rotateX(x) * rotateY(y) * rotateZ(z)
    I = np.matrix(np.eye(4))
    t = (I-m) * pivot
    m[0, 3] = t[0, 0]
    m[1, 3] = t[1, 0]
    m[2, 3] = t[2, 0]
    return m

def eulerAnglesZYX(matrix):
    'Extract the Euler angles from an ZYX rotation matrix'
    x = atan2(-matrix[1, 2], matrix[2, 2])
    cy = sqrt(1 - matrix[0, 2]**2)
    y = atan2(matrix[0, 2], cy)
    sx = sin(x)
    cx = cos(x)
    sz = cx * matrix[1, 0] + sx * matrix[2, 0]
    cz = cx * matrix[1, 1] + sx * matrix[2, 1]
    z = atan2(sz, cz)
    return np.array((x, y, z),)

def eulerAnglesXYZ(matrix):
    'Extract the Euler angles from an XYZ rotation matrix'
    z = atan2(matrix[1, 0], matrix[0, 0])
    cy = sqrt(1 - matrix[2, 0]**2)
    y = atan2(-matrix[2, 0], cy)
    sz = sin(z)
    cz = cos(z)
    sx = sz * matrix[0, 2] - cz * matrix[1, 2]
    cx = cz * matrix[1, 1] - sz * matrix[0, 1]
    x = atan2(sx, cx)
    return np.array((x, y, z),)

class Camera(object):
    def __init__(self, worldPos, rx, ry, rz, coi):
        # Initialize the camera orientation. In this case the original
        # orientation is built from XYZ Euler angles. orientation is the top
        # 3x3 XYZ rotation matrix for the view-to-world matrix, and can more
        # easily be thought of as the world space orientation.
        self.orientation = \
            (rotateZ(rz) * rotateY(ry) * rotateX(rx))

        # position is a point in world space for the camera.
        self.position = worldPos

        # Construct the world-to-view matrix, which is the inverse of the
        # view-to-world matrix.
        self.view = self.orientation.T * translate(-self.position)

        # coi is the "center of interest". It defines a point that is coi
        # units in front of the camera, which is the pivot for the tumble
        # operation.
        self.coi = coi

    def tumble(self, azimuth, elevation):
        '''Tumble the camera around the center of interest.

        Azimuth is the number of radians to rotate around the world-space Y axis.
        Elevation is the number of radians to rotate around the view-space X axis.
        '''
        # Find the world space pivot point. This is the view position in world
        # space minus the view direction vector scaled by the center of
        # interest distance.
        pivotPos = self.position - (self.coi * self.orientation.T[2]).T

        # Construct the azimuth and elevation transformation matrices
        azimuthMatrix = rotate(0, -azimuth, 0, pivotPos)
        elevationMatrix = rotate(elevation, 0, 0, self.view * pivotPos)

        # Get the new view matrix
        self.view = elevationMatrix * self.view * azimuthMatrix

        # Extract the orientation from the new view matrix
        self.orientation = np.matrix(self.view).T
        self.orientation.T[3] = [0, 0, 0, 1]

        # Now extract the new view position
        negEye = self.orientation * self.view
        self.position = -(negEye.T[3]).T
        self.position[3, 0] = 1

np.set_printoptions(precision=3)

pos = np.matrix([[5.321, 5.866, 4.383, 1]]).T
orientation = radians(-60), radians(40), 0
coi = 1

camera = Camera(pos, *orientation, coi=coi)

print 'Initial attributes:'
print np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3)
print np.round(camera.position, 3)
print 'Attributes after tumbling:'
camera.tumble(azimuth=radians(-40), elevation=radians(-60))
print np.round(np.degrees(eulerAnglesXYZ(camera.orientation)), 3)
print np.round(camera.position, 3)
Keep track of your view and right vectors from the beginning and update them with the rotation matrix. Then calculate your up vector.
I have a class tetronimo (a tetris block) that has four QRect types (named first, second, third, fourth respectively). I draw each tetronimo using build_tetronimo_L-type functions.
These build the tetronimo in a certain direction, but since in tetris you're supposed to be able to rotate the tetronimos, I'm trying to rotate a tetronimo by rotating each individual square of the tetronimo.
I have found the following formula to apply to each (x, y) coordinate of a particular square.
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
Now, the QRect type of Qt only seems to have a setCoords function that takes the (x, y) coordinates of the top-left and bottom-right points of the respective square.
I have here an example (which doesn't seem to produce the correct result) of rotating the first two squares in my tetronimo.
Can anyone tell me how I'm supposed to rotate these squares correctly, using runtime rotation calculation?
void tetromino::rotate(double angle) // angle in degrees
{
    std::map<std::string, rect_coords> coords = get_coordinates();

    // FIRST SQUARE
    rect_coords first_coords = coords["first"];
    //top left x and y
    int newx_first_tl = (cos(to_radians(angle)) * first_coords.top_left_x) - (sin(to_radians(angle)) * first_coords.top_left_y);
    int newy_first_tl = (sin(to_radians(angle)) * first_coords.top_left_x) + (cos(to_radians(angle)) * first_coords.top_left_y);
    //bottom right x and y
    int newx_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) - (sin(to_radians(angle)) * first_coords.bottom_right_y);
    int newy_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) + (sin(to_radians(angle)) * first_coords.bottom_right_y);
    //CHANGE COORDINATES
    first->setCoords( newx_first_tl, newy_first_tl, newx_first_tl + tetro_size, newy_first_tl - tetro_size);

    // SECOND SQUARE
    rect_coords second_coords = coords["second"];
    int newx_second_tl = (cos(to_radians(angle)) * second_coords.top_left_x) - (sin(to_radians(angle)) * second_coords.top_left_y);
    int newy_second_tl = (sin(to_radians(angle)) * second_coords.top_left_x) + (cos(to_radians(angle)) * second_coords.top_left_y);
    //CHANGE COORDINATES
    second->setCoords(newx_second_tl, newy_second_tl, newx_second_tl - tetro_size, newy_second_tl + tetro_size);
first and second are QRect types. rect_coords is just a struct with four ints in it, that store the coordinates of the squares.
The first square and second square calculations are different, as I was playing around trying to figure it out.
I hope someone can help me figure this out?
(Yes, I can do this much simpler, but I'm trying to learn from this)
It seems more like a math question than a programming question. Just plug in values like 90 degrees for the angle to figure this out. For 90 degrees, a point (x,y) is mapped to (-y, x). You probably don't want to rotate around the origin but around a certain pivot point c.x, c.y. For that you need to translate first, then rotate, then translate back:
(x,y) := (x-c.x, y-c.y) // translate into coo system w/ origin at c
(x,y) := (-y, x) // rotate
(x,y) := (x+c.x, y+c.y) // translate into original coo system
Before rotating you have to translate so that the piece is centered in the origin:
Translate your block so it is centered at (0, 0)
Rotate the block
Translate the center of the block back to x, y
If you rotate without translating, you will always rotate around (0, 0); but since the block is not centered there, it will not be rotated around its own center. Centering your block is quite simple:
For each point, compute the median of X and Y, let's call it m
Subtract m.X and m.Y from the coordinates of all points
Rotate
Add again m.X and m.Y to points.
Of course you can use linear algebra and vector * matrix multiplication but maybe it is too much :)
Translation
Let's say we have a segment with coordinates A(3,5) B(10,15).
If you want to rotate it around its center, we first translate it to our origin. Let's compute the center (mx, my):
mx = (3 + 10) / 2
my = (5 + 15) / 2
Now we compute points A1 and B1 translating the segment so it is centered to the origin:
A1(A.X - mx, A.Y - my)
B1(B.X - mx, B.Y - my)
Now we can perform our rotation of A1 and B1 (you know how).
Then we have to translate again to the original position:
A = (rotatedA1.X + mx, rotatedA1.y + my)
B = (rotatedB1.X + mx, rotatedB1.y + my)
If instead of having two points you have n points you have of course do everything for n points.
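For completeness, here is a minimal sketch of that whole translate-rotate-translate round trip for the segment, rotating 90° counter-clockwise (Python; the function name is mine):

def rotate_segment_90_ccw(a, b):
    # Midpoint of the segment, used as the pivot
    mx = (a[0] + b[0]) / 2.0
    my = (a[1] + b[1]) / 2.0

    def rot(p):
        dx, dy = p[0] - mx, p[1] - my   # translate so the pivot is at the origin
        return (mx - dy, my + dx)       # rotate (x, y) -> (-y, x), then translate back

    return rot(a), rot(b)

print(rotate_segment_90_ccw((3, 5), (10, 15)))   # ((11.5, 6.5), (1.5, 13.5))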
You could use Qt Graphics View which does all the geometric calculations for you.
Or are you just wanting to learn basic linear geometrical transformations? Then reading a math textbook would probably be more appropriate than coding.