I'm primarily a Flash AS3 dev, but I'm jumping into openframeworks and having trouble using 3D (these examples are in AS)
In 2D you can simulate an object orbiting a point by using Math.sin() and Math.cos(), like so:
function update(event:Event):void
{
    dot.x = xCenter + Math.cos(angle * Math.PI / 180) * range;
    dot.y = yCenter + Math.sin(angle * Math.PI / 180) * range;
    angle += speed;
}
I am wondering how I would translate this into a 3D orbit, if I wanted to also orbit in the third dimension.
function update(event:Event):void
{
    ...
    dot.z = zCenter + Math.sin(angle * Math.PI / 180) * range;
    // is this valid?
}
Any help is greatly appreciated.
If you are orbiting around the z-axis, you are leaving your z-coordinate fixed and changing your x- and y-coordinates. So your first code sample is what you are looking for.
To rotate around the x-axis (or y-axis), just replace x (or y) with z. Use cos on whichever axis you want to be at 0 degrees; the choice is arbitrary.
If what you actually want is to orbit an object around a point in 3d-space, you'll need two angles to describe the orbit: an inclination angle θ and an azimuth angle φ (see the spherical coordinate system).
For reference, those equations are (where θ and φ are your angles)
x = x0 + r sin(θ) cos(φ)
y = y0 + r sin(θ) sin(φ)
z = z0 + r cos(θ)
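Since you're moving to openFrameworks, here is a rough C++ sketch of those equations (the Vec3 struct and the function name are my own placeholders, not library API):

#include <cmath>

struct Vec3 { float x, y, z; };

// Position on a spherical orbit around 'center' with radius 'range'.
// theta is the inclination angle, phi the azimuth angle, both in radians.
Vec3 orbitPosition(Vec3 center, float range, float theta, float phi)
{
    return { center.x + range * std::sin(theta) * std::cos(phi),
             center.y + range * std::sin(theta) * std::sin(phi),
             center.z + range * std::cos(theta) };
}

// In an update loop, advance one or both angles each frame:
//   theta += thetaSpeed;  phi += phiSpeed;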
If you are orbiting around the z-axis, then you just use your first code sample and leave the z-coordinate as it is.
I would pick two perpendicular unit vectors v, w that define the plane in which to orbit, then loop over the angle and take the proper combination of v and w to build your vector p = av + bw.
EDIT: This might be of help:
http://en.wikipedia.org/wiki/Orbit_equation
EDIT: I think it is actually
center + sin(angle) * v * radius1 + cos(angle) * w * radius2
Here v and w are your unit vectors for the circle.
In 2D they were (1,0) and (0,1).
In 3D you will need to compute them - depends on orientation of the plane.
If you set radius1 = radius2, you will get a circle. Otherwise you get an ellipse.
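A sketch of that formula, reusing the Vec3 struct and <cmath> from the earlier sketch (the helper name is made up):

// Point on an ellipse lying in the plane spanned by unit vectors v and w;
// angle in radians; radius1 == radius2 gives a circle.
Vec3 orbitInPlane(Vec3 center, Vec3 v, Vec3 w,
                  float radius1, float radius2, float angle)
{
    float a = std::sin(angle) * radius1;
    float b = std::cos(angle) * radius2;
    return { center.x + a * v.x + b * w.x,
             center.y + a * v.y + b * w.y,
             center.z + a * v.z + b * w.z };
}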
If you just want the orbit to happen in an angled plane and don't mind it being elliptical, you can just do something like z = 0.2*x + 0.2*y, or any combination you fancy, after you have determined the x and y coordinates.
Related
I have a 3D scene with an infinite horizontal plane (parallel to the xz plane) at a height H along the Y vertical axis.
I would like to know how to determine the intersection between the axis of my camera and this plane.
The camera is defined by a view-matrix and a projection-matrix.
There are two sub-problems here: 1) Extracting the position and view-direction from the camera matrix. 2) Calculating the intersection between the view-ray and the plane.
Extracting position and view-direction
The view matrix describes how points are transformed from world-space to view-space. View-space in OpenGL is usually defined such that the camera is at the origin and looks in the -z direction.
To get the position of the camera, we have to transform the origin [0,0,0] of view-space back into world-space. Mathematically speaking, we have to calculate:
camera_pos_ws = inverse(view_matrix) * [0,0,0,1]
Looking at the equation, we see that only the 4th column of the inverse matrix matters. Writing the view matrix as a rotation part R and a translation part T (see note ² below), this column equals -transpose(R) * T, which written out with indices is¹
camera_pos_ws.x = -(view_matrix[0]*view_matrix[12] + view_matrix[1]*view_matrix[13] + view_matrix[2]*view_matrix[14])
camera_pos_ws.y = -(view_matrix[4]*view_matrix[12] + view_matrix[5]*view_matrix[13] + view_matrix[6]*view_matrix[14])
camera_pos_ws.z = -(view_matrix[8]*view_matrix[12] + view_matrix[9]*view_matrix[13] + view_matrix[10]*view_matrix[14])
The orientation of the camera can be found by a similar calculation. We know that the camera looks in -z direction in view-space thus the world space direction is given by
camera_dir_ws = inverse(view_matrix) * [0,0,-1,0];
Again, looking at the equation, we see that this only takes the third column of the inverse matrix into account, which for a pure rotation is the negated third row of the view matrix²
camera_dir_ws = [-view_matrix[2], -view_matrix[6], -view_matrix[10]]
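Put together as a small sketch (assuming the view matrix is stored as a column-major float[16], as in note ¹ below; the vec3 struct and helper names are my own, not from any particular library):

// Extract the world-space camera position and view direction
// from a column-major view matrix m.
struct vec3 { float x, y, z; };

vec3 cameraPosWs(const float m[16])
{
    // camera_pos_ws = -transpose(R) * T
    return { -(m[0]*m[12] + m[1]*m[13] + m[2]*m[14]),
             -(m[4]*m[12] + m[5]*m[13] + m[6]*m[14]),
             -(m[8]*m[12] + m[9]*m[13] + m[10]*m[14]) };
}

vec3 cameraDirWs(const float m[16])
{
    // the negated third row of the rotation part
    return { -m[2], -m[6], -m[10] };
}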
Calculating the intersection
We now know the camera position P and the view direction D, so we have to find the x and z values along the ray R(l) = P + l * D where y equals H. Since there is only one unknown, l, we can calculate it from
y = Py + l * Dy
H = Py + l * Dy
l = (H - Py) / Dy
The intersection point is then given by plugging l back into the ray equation.
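In code, the intersection itself is only a few lines (a sketch reusing the vec3 type from above; it reports failure when the ray is parallel to the plane):

#include <cmath>

// Intersect the view ray P + l*D with the horizontal plane y = H.
bool intersectHorizontalPlane(vec3 P, vec3 D, float H, vec3& hit)
{
    if (std::fabs(D.y) < 1e-6f)   // ray is (almost) parallel to the plane
        return false;
    float l = (H - P.y) / D.y;    // solve H = P.y + l * D.y
    hit = { P.x + l * D.x, H, P.z + l * D.z };
    return true;                  // note: l < 0 means the plane is behind the camera
}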
Notes
¹ The indices assume that the matrix is stored in a column-major linear array.
² Note that the inverse of a matrix of the form

M = [ R  T ]
    [ 0  1 ]

where R is an orthogonal 3x3 matrix, is given by

inv(M) = [ transpose(R)  -transpose(R)*T ]
         [      0                1       ]
For a general line-plane intersection there are lots of answers and tutorials.
Your case is simple because the plane is horizontal.
I suppose the camera is at C(cx, cy, cz) and it looks at T(tx, ty, tz).
Then the line camera-target can be defined by:
(cx - x)/(tx - cx) = (cy - y)/(ty - cy) = (cz - z)/(tz - cz)   // these are two independent equations
For a horizontal plane, only one equation is needed: y = H.
Substitute this value in the line equations and you get
(cx-x)/(tx-cx) = (cy-H)/(ty-cy)
(cz-z)/(tz-cz) = (cy-H)/(ty-cy)
So
x = cx - (tx-cx)*(cy-H)/(ty-cy)
y = H
z = cz - (tz-cz)*(cy-H)/(ty-cy)
Of course, if your camera looks along a horizontal line, then ty = cy and there is no solution.
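A direct transcription of those formulas (a sketch; the function name is made up, and the degenerate ty == cy case returns false):

// Intersect the line through camera C(cx,cy,cz) and target T(tx,ty,tz)
// with the horizontal plane y = H. The hit point is (x, H, z).
bool intersectWithPlane(float cx, float cy, float cz,
                        float tx, float ty, float tz,
                        float H, float& x, float& z)
{
    if (ty == cy)                    // camera looks along a horizontal line
        return false;
    float k = (cy - H) / (ty - cy);
    x = cx - (tx - cx) * k;
    z = cz - (tz - cz) * k;
    return true;
}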
Firstly, if you would like an explanation of the GLM lookAt algorithm, please look at the answer provided on this question: https://stackoverflow.com/a/19740748/1525061
mat4x4 lookAt(vec3 const & eye, vec3 const & center, vec3 const & up)
{
    vec3 f = normalize(center - eye);
    vec3 u = normalize(up);
    vec3 s = normalize(cross(f, u));
    u = cross(s, f);

    mat4x4 Result(1);
    Result[0][0] =  s.x;
    Result[1][0] =  s.y;
    Result[2][0] =  s.z;
    Result[0][1] =  u.x;
    Result[1][1] =  u.y;
    Result[2][1] =  u.z;
    Result[0][2] = -f.x;
    Result[1][2] = -f.y;
    Result[2][2] = -f.z;
    Result[3][0] = -dot(s, eye);
    Result[3][1] = -dot(u, eye);
    Result[3][2] =  dot(f, eye);
    return Result;
}
Now I'm going to tell you why I seem to have a conceptual issue with this algorithm. There are two parts to this view matrix: the translation and the rotation. The translation does the correct inverse transformation, bringing the camera position to the origin instead of the origin to the camera position. Similarly, you would expect the rotation that the camera defines to be inverted before being put into this view matrix as well. I can't see that happening here; that's my issue.
Consider the forward vector: this is where your camera looks. Consequently, this forward vector needs to be mapped to the -Z axis, which is the forward direction used by OpenGL. The way this view matrix is supposed to work is by creating an orthonormal basis in the columns of the view matrix, so when you multiply a vertex on the right-hand side of this matrix, you are essentially just converting its coordinates to those of different axes.
When I play through the rotation that results from this transformation in my mind, I see a rotation that is not the inverse rotation of the camera, like what's supposed to happen; rather, I see the non-inverse. That is, instead of finding the camera forward being mapped to the -Z axis, I find the -Z axis being mapped to the camera forward.
If you don't understand what I mean, consider a 2D example of the same type of thing that is happening here. Let's say the forward vector is (sqrt(2)/2, sqrt(2)/2), i.e. sin/cos of 45 degrees, and let's also say a side vector for this 2D camera is sin/cos of -45 degrees. We want to map this forward vector to (0,1), the positive Y axis; the positive Y axis can be thought of as the analogue of the -Z axis in OpenGL space. Let's consider a vertex in the same direction as our forward vector, namely (1,1). By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y axis by using a 2x2 matrix that consists of the forward vector in the first column and the side vector in the second column. This is equivalent to the calculation here: http://www.wolframalpha.com/input/?i=%28sqr%282%29%2F2+%2C+sqr%282%29%2F2%29++1+%2B+%28sqr%282%29%2F2%2C+-sqr%282%29%2F2+%29+1
Note that you don't get your (1,1) vertex mapped to the positive Y axis like you wanted; instead it is mapped to the positive X axis. You might also consider what happens to a vertex that was on the positive Y axis if you apply this transformation. Sure enough, it is transformed to the forward vector.
Therefore it seems like something very fishy is going on with the GLM algorithm. However, I doubt this algorithm is incorrect since it is so popular. What am I missing?
Have a look at GLU source code in Mesa: http://cgit.freedesktop.org/mesa/glu/tree/src/libutil/project.c
First in the implementation of gluPerspective, notice the -1 is using the indices [2][3] and the -2 * zNear * zFar / (zFar - zNear) is using [3][2]. This implies that the indexing is [column][row].
Now in the implementation of gluLookAt, the first row is set to side, the next one to up and the final one to -forward. This gives you the rotation matrix which is post-multiplied by the translation that brings the eye to the origin.
GLM seems to be using the same [column][row] indexing (from the code). And the piece you just posted for lookAt is consistent with the more standard gluLookAt (including the translational part). So at least GLM and GLU agree.
Let's then derive the full construction step by step, writing C for the center position and E for the eye position.
1. Move the whole scene to put the eye position at the origin, i.e. apply a translation of -E.
2. Rotate the scene to align the axes of the camera with the standard (x, y, z) axes.
2.1 Compute a positive orthonormal basis for the camera:
f = normalize(C - E) (pointing towards the center)
s = normalize(f x u) (pointing to the right side of the eye)
u = s x f (pointing up)
with this, (s, u, -f) is a positive orthonormal basis for the camera.
2.2 Find the rotation matrix R that maps the (s, u, -f) axes to the standard ones (x, y, z). The inverse rotation matrix R^-1 does the opposite and aligns the standard axes with the camera ones, which by definition means that:

         ( sx  ux  -fx )
R^-1  =  ( sy  uy  -fy )
         ( sz  uz  -fz )
Since R^-1 = R^T, we have:
      (  sx   sy   sz )
R  =  (  ux   uy   uz )
      ( -fx  -fy  -fz )
3. Combine the translation with the rotation. A point M is mapped by the "look at" transform to R (M - E) = R M - R E = R M + t with t = -R E. So the final 4x4 transform matrix for "look at" is indeed:

      (  sx   sy   sz   tx )   (  sx   sy   sz   -s.E )
L  =  (  ux   uy   uz   ty ) = (  ux   uy   uz   -u.E )
      ( -fx  -fy  -fz   tz )   ( -fx  -fy  -fz    f.E )
      (  0    0    0    1  )   (  0    0    0     1   )
So when you write:
That is, instead of finding the camera forward being mapped to the -Z
axis, I find the -Z axis being mapped to the camera forward.
it is very surprising, because by construction the "look at" transform maps the camera forward axis to the -z axis. This "look at" transform should be thought of as moving the whole scene to align the camera with the standard origin/axes; that's really what it does.
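One quick way to convince yourself is to feed the matrix points whose images you know. A small sketch using GLM (the eye/center values are arbitrary): the eye must land on the origin and the center on the negative z axis.

#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main()
{
    glm::vec3 eye(1.0f, 2.0f, 3.0f);
    glm::vec3 center(4.0f, 0.0f, -2.0f);
    glm::mat4 view = glm::lookAt(eye, center, glm::vec3(0.0f, 1.0f, 0.0f));

    glm::vec4 e = view * glm::vec4(eye, 1.0f);     // must be (0, 0, 0)
    glm::vec4 c = view * glm::vec4(center, 1.0f);  // must be (0, 0, -distance(eye, center))

    std::printf("eye    -> (%.3f, %.3f, %.3f)\n", e.x, e.y, e.z);
    std::printf("center -> (%.3f, %.3f, %.3f)\n", c.x, c.y, c.z);
}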
Using your 2D example:
By using the logic of GLM.lookAt, we should be able to map (1,1) to the Y
axis by using a 2x2 matrix that consists of the forward vector in the
first column and the side vector in the second column.
That's the opposite: following the construction I described, you need a 2x2 matrix with the forward and side vectors as rows, not columns, to map (1, 1) and the other vector to the y and x axes. To use the definition of the matrix coefficients, you need the images of the standard basis vectors under your transform; these directly give the columns of the matrix. But since what you are looking for is the opposite (mapping your vectors to the standard basis vectors), you have to invert the transformation (transpose it, since it's a rotation). Your reference vectors then become rows, not columns.
This related question might give some further insight into your fishy issue:
glm::lookAt vertical camera flips when z <= 0
The answer there might be of interest to you.
The problem is I have two points in 3D space where y+ is up, x+ is to the right, and z+ is towards you. I want to orient a cylinder between them that is as long as the distance between the two points, so that both its end centers touch the points. I got the cylinder to translate to the location at the center of the two points, and I need help coming up with a rotation matrix to apply to the cylinder so that it is oriented the correct way. My transformation matrix for the entire thing looks like this:
translate(center point) * rotateX(some X degrees) * rotateZ(some Z degrees)
The translation is applied last; that way I can get the cylinder into the correct orientation before I translate it.
Here is what I have so far for this:
mat4 getTransformation(vec3 point, vec3 parent)
{
    float deltaX = point.x - parent.x;
    float deltaY = point.y - parent.y;
    float deltaZ = point.z - parent.z;

    float yRotation = atan2f(deltaZ, deltaX) * (180.0 / M_PI);
    float xRotation = atan2f(deltaZ, deltaY) * (180.0 / M_PI);
    float zRotation = atan2f(deltaX, deltaY) * (-180.0 / M_PI);
    if (point.y < parent.y)
    {
        zRotation = atan2f(deltaX, deltaY) * (180.0 / M_PI);
    }

    vec3 center = vec3((point.x + parent.x) / 2.0,
                       (point.y + parent.y) / 2.0,
                       (point.z + parent.z) / 2.0);
    mat4 translation = Translate(center);

    return translation * RotateX(xRotation) * RotateZ(zRotation) * Scale(radius, 1, radius) * Scale(0.1, 0.1, 0.1);
}
I tried a solution given below, but it did not seem to work at all:
mat4 getTransformation(vec3 parent, vec3 point)
{
    // moves base of cylinder to origin and gives it unit scaling
    mat4 scaleFactor = Translate(0, 0.5, 0) * Scale(radius/2.0, 1/2.0, radius/2.0) * cylinderModel;
    float length = sqrtf(pow((point.x - parent.x), 2) + pow((point.y - parent.y), 2) + pow((point.z - parent.z), 2));
    vec3 direction = normalize(point - parent);

    float pitch = acos(direction.y);
    float yaw = atan2(direction.z, direction.x);

    return Translate(parent) * Scale(length, length, length) * RotateX(pitch) * RotateY(yaw) * scaleFactor;
}
After running the above code I get this:
Every black point is a point whose parent is the point that spawned it (the one before it). I want the branches to fit onto the points. Basically I am trying to implement the space colonization algorithm for random tree generation. I have most of it, but I want to map the branches onto it so it looks good. I can use GL_LINES just to make a generic connection, but if I get this working it will look so much prettier. The algorithm is explained here.
Here is an image of what I am trying to do (pardon my paint skills)
Well, there's an arbitrary number of rotation matrices satisfying your constraints, but any one of them will do. Instead of trying to figure out a specific rotation, we're just going to write down the matrix directly. Say your cylinder, when no transformation is applied, has its axis along the Z axis. You then have to transform the local-space Z axis toward the direction between those two points, i.e. z_t = normalize(p_1 - p_2), where normalize(a) = a / length(a).
Now we just need to make this a full 3-dimensional coordinate basis. We start with an arbitrary vector that's not parallel to z_t: take one of (1,0,0), (0,1,0) or (0,0,1), compute the scalar product (also called the inner or dot product) of each with z_t, and use the vector for which the absolute value is smallest; let's call this vector u.
In pseudocode:
# Start with (1,0,0)
mindotabs = abs( z_t · (1,0,0) )
minvec = (1,0,0)
for u_ in (0,1,0), (0,0,1):
    dotabs = abs( z_t · u_ )
    if dotabs < mindotabs:
        mindotabs = dotabs
        minvec = u_
u = minvec
Then you orthogonalize that vector against z_t, yielding a local y transformation y_t = normalize(u - (z_t · u) * z_t).
Finally create the x transformation by taking the cross product x_t = y_t × z_t (this order keeps the basis right-handed).
To move the cylinder into place you combine that with a matching translation matrix.
Transformation matrices are effectively just the axes of the space you're "coming from", written down as seen from the other space. So the resulting matrix, which is the rotation matrix you're looking for, is simply the vectors x_t, y_t and z_t side by side as columns. OpenGL uses so-called homogeneous matrices, so you have to pad it to a 4×4 form with a 0,0,0,1 bottom row and rightmost column.
You can then load that into OpenGL: if using the fixed-function pipeline, apply the rotation with glMultMatrix; if using shaders, multiply it onto the matrix you eventually pass to glUniform.
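A sketch of the whole construction in plain C++ (the small vector type and helper functions are mine, made up for illustration; the result is a column-major 4×4 that could be handed to glMultMatrixf):

#include <cmath>

struct V3 { float x, y, z; };

static V3    sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(V3 a, V3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3    cross(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static V3    normalize(V3 a)   { float l = std::sqrt(dot(a, a)); return { a.x/l, a.y/l, a.z/l }; }

// Build a column-major 4x4 that maps the local Z axis onto p1 - p2
// and translates to 'center' (e.g. the midpoint of p1 and p2).
void cylinderMatrix(V3 p1, V3 p2, V3 center, float out[16])
{
    V3 z_t = normalize(sub(p1, p2));

    // pick the standard basis vector least aligned with z_t
    V3 axes[3] = { {1,0,0}, {0,1,0}, {0,0,1} };
    V3 u = axes[0];
    float best = std::fabs(dot(z_t, axes[0]));
    for (int i = 1; i < 3; ++i) {
        float d = std::fabs(dot(z_t, axes[i]));
        if (d < best) { best = d; u = axes[i]; }
    }

    // orthogonalize u against z_t, then complete the right-handed basis
    float k = dot(z_t, u);
    V3 y_t = normalize(V3{ u.x - k*z_t.x, u.y - k*z_t.y, u.z - k*z_t.z });
    V3 x_t = cross(y_t, z_t);

    float m[16] = { x_t.x, x_t.y, x_t.z, 0,
                    y_t.x, y_t.y, y_t.z, 0,
                    z_t.x, z_t.y, z_t.z, 0,
                    center.x, center.y, center.z, 1 };
    for (int i = 0; i < 16; ++i) out[i] = m[i];
}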
Begin with a unit length cylinder which has one of its ends, which I call C1, at the origin (note that your image indicates that your cylinder has its center at the origin, but you can easily transform that to what I begin with). The other end, which I call C2, is then at (0,1,0).
I'll call your two points in world coordinates P1 and P2, and we want to locate C1 at P1 and C2 at P2.
Start by translating the cylinder by P1, which locates C1 at P1.
Then scale the cylinder by distance(P1, P2) along its axis, since it originally had length 1.
The remaining rotation can be computed using spherical coordinates. If you're not familiar with this type of coordinate system: it's like GPS coordinates, two angles; one around the pole axis (in your case the world's Y axis), which we typically call yaw, the other a pitch angle (in your case around the X axis in model space). These two angles can be computed by converting P2 - P1 (i.e. the local offset of P2 with respect to P1) into spherical coordinates. First rotate the object by the pitch angle around X, then by the yaw angle around Y.
Something like this will do it (pseudo-code):
Matrix getTransformation(Point P1, Point P2) {
    float length = distance(P1, P2);
    Point direction = normalize(P2 - P1);
    float pitch = acos(direction.y);               // angle away from the +Y axis, in radians
    float yaw   = atan2(direction.x, direction.z); // angle around the Y axis, in radians
    // assumes right-handed rotateX/rotateY taking radians;
    // the rightmost factor is applied first: scale, then pitch, then yaw, then move to P1
    return translate(P1) * rotateY(yaw) * rotateX(pitch) * scaleY(length);
}
Call the axis of the cylinder A. The second rotation (about X) can't change the angle between A and X, so we have to get that angle right with the first rotation (about Z).
Call the destination vector (the one between the two points) B. Take -acos(BX/BY), and that's the angle of the first rotation.
Take B again, ignore the X component, and look at its projection in the (Y, Z) plane. Take acos(BZ/BY), and that's the angle of the second rotation.
I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y-values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny) the coordinates are no longer accurate. I commented the line I think is at fault, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;
    float length_ = sqrt((z_b*z_b) + (x_b*x_b) + (y_b*y_b));

    float w_bx = asin(z_b / sqrt((x_b*x_b) + (z_b*z_b))) + w_nx;
    float w_by = asin(x_b / sqrt((x_b*x_b) + (y_b*y_b))) + w_ny; // <- there must be that fault

    x_n = cos(w_bx) * sin(w_by) * length_ + x_m;
    z_n = sin(w_bx) * sin(w_by) * length_ + z_m;
    y_n = cos(w_by) * length_ + y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see the spherical coordinate system for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct: the computation yields two inclination angles (one measured in the x-z plane, the other in the x-y plane), not a proper inclination/azimuth pair
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore, using matrix and vector math will help in this case:
b = p - m
b = RotationMatrixAroundX(wn_x) * b
b = RotationMatrixAroundY(wn_y) * b
n = m + b
See Wikipedia's article on basic rotation matrices for reference.
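A sketch of that recipe in C++ (hand-rolled basic rotation matrices applied to the offset vector; the Vec struct and function names are assumptions, the parameter names mirror the original function):

#include <cmath>

struct Vec { float x, y, z; };

static Vec rotateAroundX(Vec v, float a)   // basic rotation matrix around x
{
    return { v.x,
             v.y * std::cos(a) - v.z * std::sin(a),
             v.y * std::sin(a) + v.z * std::cos(a) };
}

static Vec rotateAroundY(Vec v, float a)   // basic rotation matrix around y
{
    return {  v.x * std::cos(a) + v.z * std::sin(a),
              v.y,
             -v.x * std::sin(a) + v.z * std::cos(a) };
}

// n = m + Ry(w_ny) * Rx(w_nx) * (p - m)
Vec rotateAroundPoint(Vec m, Vec p, float w_nx, float w_ny)
{
    Vec b = { p.x - m.x, p.y - m.y, p.z - m.z };  // difference vector
    b = rotateAroundX(b, w_nx);
    b = rotateAroundY(b, w_ny);
    return { m.x + b.x, m.y + b.y, m.z + b.z };   // add the pivot back
}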
Try to use vector math. Decide in which order you rotate; first around x, then around y, perhaps.
If you rotate along z-axis, [z' = z]
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for y-axis: [y'' = y']
x'' = x'*cos b - z' * sin b;
z'' = x'*sin b + z' * cos b;
Again rotating along x-axis: [x''' = x'']
y''' = y'' * cos c - z'' * sin c
z''' = y'' * sin c + z'' * cos c
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
The problem, as far as I can see, is a close relative of "gimbal lock": the angle w_ny can't be measured relative to the fixed xyz coordinate system, but only relative to the coordinate system obtained by first applying the w_nx rotation.
As kakTuZ observed, your code converts point to spherical coordinates. There's nothing inherently wrong with that -- with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, it's ok with me.
The result of not rotating the next reference axis along with the first rotation before applying w_ny is that two points that are 1 km apart at the equator move closer together towards the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need 3 angles. But you are right: for transforming points, 2 angles are enough. For details, ask Wikipedia ...
But when you work with OpenGL you really should use OpenGL functions like glRotatef, which build these rotations for you instead of your hand-rolled function; see the glRotatef documentation.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix on top of the stack at the point of its rendering. Obtain that matrix with glGetFloatv, and then multiply it with either your own vector-matrix multiplication function, or use one of the many ones you can obtain easily online.
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
I have a class tetronimo (a Tetris block) that has four QRect members (named first, second, third, and fourth respectively). I draw each tetronimo using build_tetronimo_L-style functions.
These build the tetronimo in a certain orientation, but since in Tetris you're supposed to be able to rotate the tetronimos, I'm trying to rotate a tetronimo by rotating each of its individual squares.
I have found the following formula to apply to each (x, y) coordinate of a particular square.
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
Now, Qt's QRect type only seems to have a setCoords function that takes the (x, y) coordinates of the top-left and bottom-right points of the respective square.
Here is an example (which doesn't seem to produce the correct result) of rotating the first two squares of my tetronimo.
Can anyone tell me how I'm supposed to rotate these squares correctly, using runtime rotation calculation?
void tetromino::rotate(double angle) // angle in degrees
{
    std::map<std::string, rect_coords> coords = get_coordinates();

    // FIRST SQUARE
    rect_coords first_coords = coords["first"];
    // top left x and y
    int newx_first_tl = (cos(to_radians(angle)) * first_coords.top_left_x) - (sin(to_radians(angle)) * first_coords.top_left_y);
    int newy_first_tl = (sin(to_radians(angle)) * first_coords.top_left_x) + (cos(to_radians(angle)) * first_coords.top_left_y);
    // bottom right x and y
    int newx_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) - (sin(to_radians(angle)) * first_coords.bottom_right_y);
    int newy_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) + (sin(to_radians(angle)) * first_coords.bottom_right_y);
    // CHANGE COORDINATES
    first->setCoords(newx_first_tl, newy_first_tl, newx_first_tl + tetro_size, newy_first_tl - tetro_size);

    // SECOND SQUARE
    rect_coords second_coords = coords["second"];
    int newx_second_tl = (cos(to_radians(angle)) * second_coords.top_left_x) - (sin(to_radians(angle)) * second_coords.top_left_y);
    int newy_second_tl = (sin(to_radians(angle)) * second_coords.top_left_x) + (cos(to_radians(angle)) * second_coords.top_left_y);
    // CHANGE COORDINATES
    second->setCoords(newx_second_tl, newy_second_tl, newx_second_tl - tetro_size, newy_second_tl + tetro_size);
}
first and second are QRect types. rect_coords is just a struct with four ints in it, that store the coordinates of the squares.
The first square and second square calculations are different, as I was playing around trying to figure it out.
I hope someone can help me figure this out.
(Yes, I can do this much simpler, but I'm trying to learn from this)
It seems more like a math question than a programming question. Just plug in a value like 90 degrees for the angle to figure this out. For 90 degrees, a point (x, y) is mapped to (-y, x). You probably don't want to rotate around the origin but around a certain pivot point (c.x, c.y). For that you need to translate first, then rotate, then translate back:
(x, y) := (x - c.x, y - c.y)   // translate into coordinate system with origin at c
(x, y) := (-y, x)              // rotate by 90 degrees
(x, y) := (x + c.x, y + c.y)   // translate back into the original coordinate system
Before rotating you have to translate so that the piece is centered at the origin:
Translate your block, centering it at 0, 0
Rotate the block
Translate the center of the block back to x, y
If you rotate without translating, you will always rotate around 0, 0; since the block is not centered there, it will swing around the origin instead of spinning around its own center. Centering your block is quite simple:
For each point, compute the midpoint m of the X and Y coordinates
Subtract m.X and m.Y from the coordinates of all points
Rotate
Add m.X and m.Y back to the points.
Of course you can use linear algebra and vector * matrix multiplication but maybe it is too much :)
Translation
Let's say we have a segment with coordinates A(3,5) B(10,15).
If you want to rotate it around its center, we first translate it to our origin. Let's compute mx and my:
mx = (3 + 10) / 2 = 6.5
my = (5 + 15) / 2 = 10
Now we compute points A1 and B1 translating the segment so it is centered to the origin:
A1(A.X - mx, A.Y - my)
B1(B.X - mx, B.Y - my)
Now we can perform our rotation of A1 and B1 (you know how).
Then we have to translate again to the original position:
A = (rotatedA1.X + mx, rotatedA1.Y + my)
B = (rotatedB1.X + mx, rotatedB1.Y + my)
If instead of two points you have n points, you of course do the same for all n points.
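The worked example as a C++ sketch (rotating the segment by an arbitrary angle around its midpoint; the Point struct is minimal and not a Qt type):

#include <cmath>

struct Point { double x, y; };

// Rotate p around pivot c by 'angle' radians.
Point rotateAround(Point p, Point c, double angle)
{
    double dx = p.x - c.x, dy = p.y - c.y;  // translate so c is the origin
    return { c.x + dx * std::cos(angle) - dy * std::sin(angle),
             c.y + dx * std::sin(angle) + dy * std::cos(angle) };  // rotate, translate back
}

int main()
{
    Point A = { 3, 5 }, B = { 10, 15 };
    Point m = { (A.x + B.x) / 2, (A.y + B.y) / 2 };  // midpoint (6.5, 10)
    double angle = 3.14159265358979 / 2;             // 90 degrees in radians
    Point A2 = rotateAround(A, m, angle);
    Point B2 = rotateAround(B, m, angle);
    // A2 and B2 are the rotated endpoints, still centered on m.
}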
You could use Qt Graphics View which does all the geometric calculations for you.
Or do you just want to learn basic linear geometric transformations? Then reading a math textbook would probably be more appropriate than coding.