Rotate an image object along with the pointer in C/C++

I have a C application where I have loaded my image (GIF) object onto the screen. Now I want the image object to rotate about one axis so that it follows my pointer.
That is, wherever I move the pointer on the screen, the image should rotate around a fixed point. How do I do that?
I have seen formulas like
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
but they take an angle as input, and I don't have an angle; I only have the pointer coordinates. How do I make the object rotate according to the mouse pointer?

Seriously... You have learnt trigonometry in secondary school, right?
angle = arctan((pointerY - centerY) / (pointerX - centerX))
in C:
// obtain pointerX and pointerY; calculate centerX as width of the image / 2,
// centerY as height of the image / 2
double angle = atan2(pointerY - centerY, pointerX - centerX);
double newX = cos(angle) * oldX - sin(angle) * oldY;
double newY = sin(angle) * oldX + cos(angle) * oldY;
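Putting the two pieces together, a minimal sketch might look like the following (Point, corner, pivot, and the pointer variables are placeholder names, not from your code); note that the rotation has to happen about the fixed point, so translate to it, rotate, then translate back:

#include <cmath>

struct Point { double x, y; };

// Rotate one point of the image about a fixed pivot so that the image
// tracks the pointer position.
Point rotateTowardsPointer(Point corner, Point pivot,
                           double pointerX, double pointerY)
{
    // angle between the +x axis and the line from the pivot to the pointer
    double angle = std::atan2(pointerY - pivot.y, pointerX - pivot.x);

    // translate so the pivot becomes the origin
    double oldX = corner.x - pivot.x;
    double oldY = corner.y - pivot.y;

    // standard 2D rotation
    double newX = std::cos(angle) * oldX - std::sin(angle) * oldY;
    double newY = std::sin(angle) * oldX + std::cos(angle) * oldY;

    // translate back
    return Point{ newX + pivot.x, newY + pivot.y };
}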

First of all, that formula is perfectly fine if your rotation is in 2D space. You cannot remove the angle from the formula, because a rotation without an angle is meaningless. Think about it.
What you really need is to learn some more basic material before doing what you are trying to do. For example, you should learn about:
How to get the mouse location from your window management system (for example SDL)
How to find an angle based on the mouse location
How to draw quads with texture on them (For example using OpenGL)
How to perform transformations, either manually or, for example, using OpenGL itself
Update
If you have no choice but to draw axis-aligned rectangles, you need to rotate the image manually, creating a new image. This link contains all the keywords you need to look up to do that. In short, it goes something like this:
for every point (dr, dc) in the destination image
    find the inverse transform of (dr, dc) in the original image, call it (or, oc)
    // note that or and oc are most probably fractional
    from the colors of
        (floor(or), floor(oc))
        (floor(or), ceil(oc))
        (ceil(or), floor(oc))
        (ceil(or), ceil(oc))
    compute a color (r, g, b) using bilinear interpolation
    dest_image[dr][dc] = (r, g, b)
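As a rough sketch of that loop in code (assuming a simple 8-bit grayscale Image type with a pixel buffer; the type and its members are placeholders, not from the question):

#include <cmath>
#include <vector>

struct Image {
    int width, height;
    std::vector<unsigned char> pixels;                 // row-major, 1 byte per pixel
    unsigned char at(int r, int c) const { return pixels[r * width + c]; }
};

Image rotateImage(const Image& src, double angle)      // angle in radians
{
    Image dst{ src.width, src.height,
               std::vector<unsigned char>(src.width * src.height, 0) };
    const double cr = src.height / 2.0, cc = src.width / 2.0;  // rotate about the center

    for (int dr = 0; dr < dst.height; ++dr) {
        for (int dc = 0; dc < dst.width; ++dc) {
            // inverse transform: rotate the destination offset back by -angle
            double x = dc - cc, y = dr - cr;
            double ocol =  std::cos(angle) * x + std::sin(angle) * y + cc;
            double orow = -std::sin(angle) * x + std::cos(angle) * y + cr;

            int r0 = (int)std::floor(orow), c0 = (int)std::floor(ocol);
            if (r0 < 0 || c0 < 0 || r0 + 1 >= src.height || c0 + 1 >= src.width)
                continue;                              // source falls outside the image

            // bilinear interpolation between the four surrounding source pixels
            double fr = orow - r0, fc = ocol - c0;
            double top    = src.at(r0,     c0) * (1 - fc) + src.at(r0,     c0 + 1) * fc;
            double bottom = src.at(r0 + 1, c0) * (1 - fc) + src.at(r0 + 1, c0 + 1) * fc;
            dst.pixels[dr * dst.width + dc] =
                (unsigned char)(top * (1 - fr) + bottom * fr);
        }
    }
    return dst;
}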

The angle is the one you calculate between the point where the user clicks on the screen and the old coordinates.
For example, on screen you have a square:
( 0,10)-----(10,10)
   |           |
   |           |
   |           |
( 0, 0)-----(10, 0)
and if the user clicks at, say, (15,5),
you can, for example, calculate the angle relative to your square from either a corner or from the center of the square, and then just use the formulas you already have for each coordinate of the square.

Related

Computing a character turn angle given old and new position - OpenGL

I am working on a game project using OpenGL. I am building a game from skeleton code I found online. I have a character that can move around in a 2D plane (x and z, i.e. you are viewing the character from above). I am currently stuck on making him rotate as he moves, and I can't seem to find a solution online that solves my problem.
At the moment, when the character is being drawn he faces a certain way (along the arrow in my diagram below). I can rotate him an arbitrary number of degrees from his default direction using glm::rotate.
I have updated the code to log the character's position from a frame ago when he moves, so I have this information:
character old position (known) ->    O
character starting angle (unknown)-> |\
                                     | \
                                     |  \
                                     |(X)\
                                     |    \
                                     V     O  <- character new position (known)
How do I compute the angle (X)? Is it possible with the information I have?
I have been doodling on a page for the last hour but can't seem to figure it out. Thanks very much.
Yes. This answer gives you an example of how to do it: How to calculate the angle between a line and the horizontal axis?
Note, however, that this will give you the angle between the horizontal axis and the point. You can, however, just add 90 degrees.
What you're doing sounds somewhat convoluted. From the description, it seems like you want a rotation matrix that matches the direction. There's really no need to calculate an angle. You can build the rotation matrix directly, which is easier and more efficient.
I'll illustrate the calculations with points/vectors in the xy-plane, since that's much more standard. It sounds like you're operating in the xz-plane. While that doesn't change things much, you might need slight changes because you have a left-handed coordinate system.
If you have the direction vector (difference between new position and old position), all you need to do is normalize it, and you already have what's needed for the rotation matrix. I'll write the calculation explicitly, but your matrix/vector library most likely has a method to normalize a vector.
float vx = newPosX - oldPosX;
float vy = newPosY - oldPosY;
float s = 1.0f / sqrt(vx * vx + vy * vy);
vx *= s;
vy *= s;
vx is now the cosine of the rotation angle, and vy the sine of the rotation angle. Substituting this into the standard form of a rotation matrix, you get:
R = ( cos(phi)  -sin(phi) )  =  ( vx  -vy )
    ( sin(phi)   cos(phi) )     ( vy   vx )
This is the absolute rotation for the new direction. If you need the relative rotation between old direction and new direction, it just takes a few more operations. Let's say you already calculated the normalized vectors for the old and new directions as (v1x, v1y) and (v2x, v2y). The cosine of the rotation angle is the scalar product of the two vectors:
cosPhi = v1x * v2x + v1y * v2y;
and the sine is the length of the cross product. Since both vectors are in the xy-plane, that's simply the z-component of the cross product:
sinPhi = v1x * v2y - v1y * v2x;
With these two values, you can directly build the rotation matrix again:
R = ( cosPhi  -sinPhi )
    ( sinPhi   cosPhi )
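A compact sketch of that calculation (the Mat2 struct and the function name below are placeholders for whatever matrix/vector types your library provides):

#include <cmath>

struct Mat2 { float m00, m01, m10, m11; };   // row-major: ( m00 m01 ; m10 m11 )

// Relative rotation that takes the old direction onto the new direction.
Mat2 rotationBetween(float oldDirX, float oldDirY, float newDirX, float newDirY)
{
    // normalize both direction vectors
    float s1 = 1.0f / std::sqrt(oldDirX * oldDirX + oldDirY * oldDirY);
    float s2 = 1.0f / std::sqrt(newDirX * newDirX + newDirY * newDirY);
    float v1x = oldDirX * s1, v1y = oldDirY * s1;
    float v2x = newDirX * s2, v2y = newDirY * s2;

    // cosine from the dot product, sine from the 2D cross product
    float cosPhi = v1x * v2x + v1y * v2y;
    float sinPhi = v1x * v2y - v1y * v2x;

    // R = ( cosPhi -sinPhi ; sinPhi cosPhi )
    return Mat2{ cosPhi, -sinPhi, sinPhi, cosPhi };
}

For the absolute rotation, pass (1, 0) as the old direction (the default facing along +x); cosPhi and sinPhi then reduce to vx and vy, which is exactly the ( vx -vy ; vy vx ) matrix above.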

From window to opengl coordinate system intuitive explanation

I am trying to understand the mapping from the window coordinate axes (origin at the top-left) to the OpenGL coordinate axes (origin at the bottom-left) when using the mouse function. In the relevant book this mapping is described by the following two lines:
points[count].x = (float) x / (w/2) - 1.0;
points[count].y = (float) (h-y) / (h/2) - 1.0;
I suspect that these two lines represent a scaling. Could you please give an intuitive, mathematical explanation of this mapping?
What book are you referring to? The origin in NDC space is the center of the viewport ((0,0) is the center; (-1,-1) is the bottom-left; (1,1) is the top-right). Any other coordinate space is defined by your projection matrix.
I believe what the book is trying to teach you is that NDC (-1,-1) is the bottom-left corner of your viewport and NDC (1,1) is the top-right corner.
A more complete mapping would include the X and Y location of your viewport:
NDCX = (2.0 * (ScreenX - ViewportX) / ViewportW) - 1.0;
NDCY = (2.0 * (ScreenY - ViewportY) / ViewportH) - 1.0;
[Figure: the mapping from window coordinates onto NDC; the square on the right is the viewport.]
You of course have one additional step, since the Y-axis runs in the opposite direction in your mouse coordinate system. That is why you see the Y-axis flipped in your mapping (h - y).
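Putting the viewport mapping and the Y flip together, a small sketch might look like this (the Viewport struct and the function name are placeholders; the sketch assumes the mouse coordinates and the viewport offset share the same origin):

struct Viewport { float x, y, w, h; };   // offset and size; for a full-window viewport x = y = 0

// Map a mouse position (window coordinates, origin at the top-left) into NDC.
void mouseToNDC(float mouseX, float mouseY, const Viewport& vp,
                float& ndcX, float& ndcY)
{
    ndcX = 2.0f * (mouseX - vp.x) / vp.w - 1.0f;

    // flip Y first: window Y grows downward, NDC Y grows upward
    float flippedY = vp.h - (mouseY - vp.y);
    ndcY = 2.0f * flippedY / vp.h - 1.0f;
}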

Rotate tetris blocks at runtime

I have a class tetromino (a Tetris block) that has four QRect members (named first, second, third, and fourth respectively). I draw each tetromino using build_tetronimo_L-style functions.
These build the tetromino in a certain orientation, but since in Tetris you are supposed to be able to rotate the tetrominoes, I am trying to rotate a tetromino by rotating each of its individual squares.
I have found the following formula to apply to each (x, y) coordinate of a particular square.
newx = cos(angle) * oldx - sin(angle) * oldy
newy = sin(angle) * oldx + cos(angle) * oldy
Now, Qt's QRect type only seems to have a setCoords function that takes the (x, y) coordinates of the top-left and bottom-right points of the respective square.
Here is an example (which doesn't seem to produce the correct result) of rotating the first two squares in my tetromino.
Can anyone tell me how I'm supposed to rotate these squares correctly, using runtime rotation calculation?
void tetromino::rotate(double angle) // angle in degrees
{
    std::map<std::string, rect_coords> coords = get_coordinates();

    // FIRST SQUARE
    rect_coords first_coords = coords["first"];
    // top-left x and y
    int newx_first_tl = (cos(to_radians(angle)) * first_coords.top_left_x) - (sin(to_radians(angle)) * first_coords.top_left_y);
    int newy_first_tl = (sin(to_radians(angle)) * first_coords.top_left_x) + (cos(to_radians(angle)) * first_coords.top_left_y);
    // bottom-right x and y
    int newx_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) - (sin(to_radians(angle)) * first_coords.bottom_right_y);
    int newy_first_bl = (cos(to_radians(angle)) * first_coords.bottom_right_x) + (sin(to_radians(angle)) * first_coords.bottom_right_y);
    // CHANGE COORDINATES
    first->setCoords(newx_first_tl, newy_first_tl, newx_first_tl + tetro_size, newy_first_tl - tetro_size);

    // SECOND SQUARE
    rect_coords second_coords = coords["second"];
    int newx_second_tl = (cos(to_radians(angle)) * second_coords.top_left_x) - (sin(to_radians(angle)) * second_coords.top_left_y);
    int newy_second_tl = (sin(to_radians(angle)) * second_coords.top_left_x) + (cos(to_radians(angle)) * second_coords.top_left_y);
    // CHANGE COORDINATES
    second->setCoords(newx_second_tl, newy_second_tl, newx_second_tl - tetro_size, newy_second_tl + tetro_size);
}
first and second are QRect pointers. rect_coords is just a struct with four ints that store the coordinates of a square.
The first-square and second-square calculations differ because I was playing around trying to figure it out.
I hope someone can help me figure this out.
(Yes, I can do this much simpler, but I'm trying to learn from this)
It seems more like a math question than a programming question. Just plug in values like 90 degrees for the angle to figure this out. For 90 degrees, a point (x,y) is mapped to (-y, x). You probably don't want to rotate around the origin but around a certain pivot point c.x, c.y. For that you need to translate first, then rotate, then translate back:
(x,y) := (x-c.x, y-c.y) // translate into coo system w/ origin at c
(x,y) := (-y, x) // rotate
(x,y) := (x+c.x, y+c.y) // translate into original coo system
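As a tiny sketch of those three steps for the 90-degree case (Point and the function name are placeholders):

struct Point { double x, y; };

// Rotate p by 90 degrees around the pivot c.
Point rotate90Around(Point p, Point c)
{
    double x = p.x - c.x;                  // translate into coo system with origin at c
    double y = p.y - c.y;
    double rx = -y, ry = x;                // rotate: (x, y) -> (-y, x)
    return Point{ rx + c.x, ry + c.y };    // translate back into the original coo system
}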
Before rotating, you have to translate so that the piece is centered at the origin:
Translate your block so that it is centered at (0, 0)
Rotate the block
Translate the center of the block back to (x, y)
If you rotate without translating, you will always rotate around (0, 0); but since the block is not centered there, it will not spin around its own center. Centering your block is quite simple:
Compute the middle of the X coordinates and of the Y coordinates over all points; let's call it m
Subtract m.X and m.Y from the coordinates of all points
Rotate
Add m.X and m.Y back to the points.
Of course you can use linear algebra and vector * matrix multiplication but maybe it is too much :)
Translation
Let's say we have a segment with coordinates A(3,5) and B(10,15).
If you want to rotate it around its center, we first translate it to the origin. The center is the midpoint of A and B, so let's compute mx and my:
mx = (3 + 10) / 2
my = (5 + 15) / 2
Now we compute points A1 and B1 by translating the segment so that it is centered at the origin:
A1 = (A.X - mx, A.Y - my)
B1 = (B.X - mx, B.Y - my)
Now we can perform our rotation of A1 and B1 (you know how).
Then we have to translate back to the original position:
A = (rotatedA1.X + mx, rotatedA1.Y + my)
B = (rotatedB1.X + mx, rotatedB1.Y + my)
If instead of two points you have n points, you of course do the same for all n points.
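A short sketch of that recipe for an arbitrary list of points and an arbitrary angle (Point, the function name, and the use of the bounding-box center as the pivot are my own choices here, not from the question):

#include <cmath>
#include <algorithm>
#include <vector>

struct Point { double x, y; };

// Rotate all points in place, by `angle` radians, around the center of their bounding box.
void rotateAroundCenter(std::vector<Point>& pts, double angle)
{
    if (pts.empty()) return;

    // find the center: midpoint between the minimum and maximum coordinates
    double minX = pts[0].x, maxX = pts[0].x, minY = pts[0].y, maxY = pts[0].y;
    for (const Point& p : pts) {
        minX = std::min(minX, p.x);  maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y);  maxY = std::max(maxY, p.y);
    }
    const double mx = (minX + maxX) / 2.0, my = (minY + maxY) / 2.0;

    // translate to the origin, rotate, translate back
    for (Point& p : pts) {
        double x = p.x - mx, y = p.y - my;
        p.x = std::cos(angle) * x - std::sin(angle) * y + mx;
        p.y = std::sin(angle) * x + std::cos(angle) * y + my;
    }
}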
You could use Qt Graphics View which does all the geometric calculations for you.
Or are you just wanting to learn basic linear geometrical transformations? Then reading a math textbook would probably be more appropriate than coding.

Screen Projection and Culling united

I am currently dealing with several thousand boxes that I'd like to project onto the screen to determine their sizes and distances to the camera.
My current approach is to get a sphere representing the box and project that using view and projection matrices and the viewport values.
// PSEUDOCODE
// project box center from world into viewspace
boxCenterInViewSpace = viewMatrix * boxCenter;
// get two points left and right of the center
leftPoint = boxCenterInViewSpace - radius;
rightPoint = boxCenterInViewSpace + radius;
// project points from view space into clip space
leftPoint = projectionMatrix * leftPoint;
rightPoint = projectionMatrix * rightPoint;
// normalize points
leftPoint /= leftPoint.w;
rightPoint /= rightPoint.w;
// move to 0..1 range
leftPoint = leftPoint * 0.5 + 0.5;
rightPoint = rightPoint * 0.5 + 0.5;
// scale to viewport
leftPoint.x = leftPoint.x * viewPort.right + viewPort.left;
leftPoint.y = leftPoint.y * viewPort.bottom + viewPort.top;
rightPoint.x = rightPoint.x * viewPort.right + viewPort.left;
rightPoint.y = rightPoint.y * viewPort.bottom + viewPort.top;
// at this point i check if the node is visible on screen by comparing the points to the viewport
// calculate size
length(rightPoint - leftPoint)
At another point I calculate the distance of the box to the camera.
The first problem is that I won't know whether the box is just below the viewport, since I only do the calculation horizontally. Is there a way to project a real sphere onto the screen somehow? Some method that looks like:
float getSizeOfSphereProjectedOnScreen(vec3 midpoint, float radius)
The other question is simpler: in which coordinate space does the z coordinate correspond to the distance to the camera?
To sum it up, I want to calculate:
Is the Box in the view frustum?
What is the size of the Box on the screen?
What is the distance from Box to camera?
To simplify the calculations I'd like to use a sphere representation for this, but I don't know how to project a sphere.
[Updated]
What is the distance from Box to camera?
In [which] coordinate space is the z coordinate corresponding to the distance to the camera?
The answer is none of the usual spaces. The closest one would be in view space (i.e. after you apply the view matrix but not the projection matrix). In view space, the distance to the camera should be sqrt(x*x + y*y + z*z), because the camera is at the origin. (z would be a reasonable approximation only if |x| and |y| were really small relative to |z|.) This is assuming that knowing the distance from the camera to the center of the box is good enough.
I think if you really wanted a space in which the z coordinate corresponds to the distance to the camera, you'd need to map a spherical locus of points sqrt(x*x + y*y + z*z) = d to a plane z = d. I don't know that you can do that with a matrix.
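For the distance itself, a minimal sketch in view space might look like this (written with GLM-style types as an assumption, since the question's pseudocode doesn't name a particular math library):

#include <glm/glm.hpp>

// Distance from the camera to the box center: transform the center into view
// space (where the camera sits at the origin) and take the vector length.
float distanceToCamera(const glm::mat4& viewMatrix, const glm::vec3& boxCenterWorld)
{
    glm::vec4 centerInView = viewMatrix * glm::vec4(boxCenterWorld, 1.0f);
    return glm::length(glm::vec3(centerInView));   // sqrt(x*x + y*y + z*z)
}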
Is the Box in the view frustum?
What is the size of the Box on the screen?
I think you're on the right track with this, but depending on which direction the camera is facing, your left and right points might not determine how wide the box looks or whether the box intersects the view frustum. See my answer to your other question for a long way to do this.

Draw object in opengl depending on x,y coordinate given by various input devices

I have multiple input devices and I want to create a cursor for each one. I'm given x and y coordinates, and I want to draw it on the screen.
How do I calculate the x,y when using glTranslatef?
I'm pretty sure, unless I'm suffering from a major mind failure, it goes as follows:
float fX = ((float)(x * 2) / (float)screenWidth) - 1.0f;
float fY = ((float)(-y * 2) / (float)screenHeight) + 1.0f;
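For example, with legacy fixed-function OpenGL (assuming the projection and modelview matrices are left at identity, so NDC maps directly onto the window), drawing a small cursor quad at the converted position could look like this sketch:

#include <GL/gl.h>

// Draw a small square cursor at the converted NDC position.
void drawCursor(float fX, float fY)
{
    glPushMatrix();
    glTranslatef(fX, fY, 0.0f);          // move to the cursor position in NDC
    glBegin(GL_QUADS);
    glVertex2f(-0.02f, -0.02f);
    glVertex2f( 0.02f, -0.02f);
    glVertex2f( 0.02f,  0.02f);
    glVertex2f(-0.02f,  0.02f);
    glEnd();
    glPopMatrix();
}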