Rotating an equilateral triangle about the centroid in DirectX - c++

I have a project in which I want to make a game that uses Asteroids-style ships in order to demonstrate AI pathfinding and steering behaviors.
The object stores the position of the triangle's centroid (i.e. the player's position) and the ship's orientation in degrees, then calculates the positions of the three vertices like so:
A is the nose of the triangle. When the orientation is 0 degrees, the ship's nose sits on the X-axis offset by SIZE, which determines the size of the ship and is equal to the radius of its bounding circle.
Vertices B and C are rotated 120 and 240 degrees respectively from the position of A, then the program draws line segments between each vertex.
When I start the program, this works beautifully, and my bounding circle, orientation vector, and ship look how I expect them to.
However, I implemented a control such that the angle is incremented and decremented with the left and right arrow keys. This control has been tested and proven to work as desired. When I rotate, the orientation vector adjusts accordingly, and the ship rotates... but only sort of. The points immediately drift in toward the center of mass until they collapse onto it and disappear forever, regardless of which direction I rotate in.
//Convert degrees into radians
float Ship::convert(float degrees){
    return degrees*(PI/180);
}
//Rotate X-Coord
int Ship::rotateX(int x, int y, float degrees){
    return (x*cos(convert(degrees)))-(y*sin(convert(degrees)));
}
//Rotate Y-Coord
int Ship::rotateY(int x, int y, float degrees){
    return x*sin(convert(degrees))+y*cos(convert(degrees));
}
//Rotate the Ship
void Ship::rotateShip(float degrees){
    vertA[0]=rotateX(vertA[0], vertA[1], degrees);
    vertA[1]=rotateY(vertA[0], vertA[1], degrees);
    vertB[0]=rotateX(vertB[0], vertB[1], degrees);
    vertB[1]=rotateY(vertB[0], vertB[1], degrees);
    vertC[0]=rotateX(vertC[0], vertC[1], degrees);
    vertC[1]=rotateY(vertC[0], vertC[1], degrees);
}
The rotate functions work fine, but I can't be sure that rotating the ship is working. It seems that it should: the vert variables are arrays that store the X and Y coords of that vertex. If I rotate each vertex by the orientation angle of the ship, it should in theory spin the triangle about its center, right?
The verts are all calibrated about the origin, and the drawShip() function in the Game object takes those coords and adds the playerX and playerY to translate the fully drawn ship to the position of the player. So essentially, translating it to the origin, rotating it, and translating it back to the position of the player.
If it helps, I will include the relevant functions from the Game object. In this code, psp is the Ship object. drawShip takes all the parameters of the ship object and should work, but I'm getting unwanted results.
//Draw Ship
void Game::drawShip1(int shipX, int shipY, int vel, int alpha, int size, int* vertA, int* vertB, int* vertC, int r, int g, int b){
    //Draw a bounding circle
    drawCircle(shipX, shipY, size, 0, 255, 255);
    //Draw the ship by connecting vertices A, B, and C
    //Segment AB
    drawLine(shipX+vertA[0], shipY+vertA[1], shipX+vertB[0], shipY+vertB[1], r, g, b);
    //Segment AC
    drawLine(shipX+vertA[0], shipY+vertA[1], shipX+vertC[0], shipY+vertC[1], r, g, b);
    //Segment BC
    drawLine(shipX+vertB[0], shipY+vertB[1], shipX+vertC[0], shipY+vertC[1], r, g, b);
}
void Game::ComposeFrame()
{
    float pAlpha=psp.getAlpha();
    if(kbd.LeftIsPressed()){
        pAlpha-=1;
        psp.setAlpha(pAlpha);
        psp.rotateShip(psp.getAlpha());
    }
    if(kbd.RightIsPressed()){
        pAlpha+=1;
        psp.setAlpha(pAlpha);
        psp.rotateShip(psp.getAlpha());
    }
    //Draw Playership
    drawShip1(psp.getX(), psp.getY(), psp.getV(), psp.getAlpha(), psp.getSize(), psp.getVertA(), psp.getVertB(), psp.getVertC(), psp.getR(), psp.getG(), psp.getB());
    drawLine(psp.getX(), psp.getY(), psp.getX()+psp.rotateX(0+psp.getSize(), 0, psp.getAlpha()), psp.getY()+psp.rotateY(0+psp.getSize(), 0, psp.getAlpha()), psp.getR(), psp.getG(), psp.getB());
}

I see several problems in your code which might add up to the result you are seeing.
The first is that you handle coordinates as integers. The fractional part is truncated on every rotation, so the values shrink toward zero. Use float or even double.
The second is that the rotation is applied in two steps. The first step (calculating the x-coordinate) is correct, but the second step (calculating the y-coordinate) is erroneous: you use the new x-coordinate where you should pass the old one. You should implement a method that rotates the complete vector rather than one component at a time, anyway. Why don't you use the DirectX math functions?
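A minimal sketch of both fixes together, assuming the same member layout as the question's Ship (vertA/vertB/vertC as 2-element arrays, names taken from the question): coordinates become float, and both old coordinates are read before either one is overwritten.

```cpp
#include <cmath>

struct Ship {
    float vertA[2], vertB[2], vertC[2];

    static constexpr float PI = 3.14159265358979f;

    static float convert(float degrees) {
        return degrees * (PI / 180.0f);
    }

    // Rotate one vertex in place: floats throughout, and the old
    // coordinates are saved before either component is written.
    static void rotateVert(float* v, float degrees) {
        float rad = convert(degrees);
        float oldX = v[0], oldY = v[1];
        v[0] = oldX * std::cos(rad) - oldY * std::sin(rad);
        v[1] = oldX * std::sin(rad) + oldY * std::cos(rad);
    }

    void rotateShip(float degrees) {
        rotateVert(vertA, degrees);
        rotateVert(vertB, degrees);
        rotateVert(vertC, degrees);
    }
};
```

With this version, rotating a vertex at (10, 0) by 90 degrees lands near (0, 10) instead of collapsing toward the origin over repeated calls.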

Related

OpenGL. Window-To-Viewport Transformation

I'm new to OpenGL. Hence I require some assistance in the matter described below. I'm not sure how to produce viewport coordinates with respect to screen coordinates as well as producing it in c++ since I used to deal with Java.
In this question I need to implement the function worldToViewportTransform.
The function implements a 2D orthographic projection matrix, which is used for the (world)window-to-viewport transformation. In OpenGL this matrix is defined by gluOrtho2D.
Input are the coordinates of the world-window (winLeft, winRight, winBottom, winTop), the top-left corner of the viewport (window) on the screen (windowX, windowY), and the size of the viewport (window) on the screen (windowWidth, windowHeight).
Output are the values A, B, C and D which constitute the world-to-viewport transformation.
The answer needs to use the function format below - copy it and fill out the missing code. The function uses pointer variables for the values A, B, C and D since Coderunner does not seem to accept C++ notation - the code segment below converts the pointer variables to double values and back, so you don't need to understand how pointers work.
void worldToViewportTransform (double winLeft, double winRight, double winBottom, double winTop,
                               int windowX, int windowY, int windowWidth, int windowHeight,
                               double* APtr, double* BPtr, double* CPtr, double* DPtr)
{
    double A=*APtr, B=*BPtr, C=*CPtr, D=*DPtr;
    <INSERT YOUR CODE HERE>
    *APtr=A; *BPtr=B; *CPtr=C; *DPtr=D;
}
This particular test case should produce the output: (u,v)=(-200,367)
//Code for Testing
double A, B, C, D;
double winLeft=1.5, winRight=4.5, winBottom=0.0, winTop=3.0;
int windowX=100, windowY=100, windowWidth=600, windowHeight=400;
worldToViewportTransform(winLeft, winRight, winBottom, winTop,
windowX, windowY, windowWidth, windowHeight,
&A, &B, &C, &D);
// Test cases
double x, y; // world coordinates
int u, v; // window coordinates
x=0.0f; y=1.0f;
u=(int) floor(A*x+C+0.5f);
v=(int) floor(B*y+D+0.5f);
printf("(u,v)=(%d,%d)",u,v);
The function implements a 2D orthographic projection matrix, which is used for the (world)window-to-viewport transformation. In OpenGL this matrix is defined by gluOrtho2D.
No! gluOrtho2D/glOrtho does not do that. These functions set up an orthographic projection matrix, whose purpose is to transform from view-space into clip-space.
Then an implicit clip-space to NDC-space transform happens behind the scenes.
Finally the NDC-space coordinates are transformed to window coordinates in the viewport range.
Input are the coordinates of the world-window (winLeft, winRight, winBottom, winTop), the top-left corner of the viewport (window) on the screen (windowX, windowY), and the size of the viewport (window) on the screen (windowWidth, windowHeight).
Your nomenclature seems a little bit off.
The usual convention is that the viewport defines the target rectangle within the window, specified in window-relative coordinates. In OpenGL, the window coordinate (0,0) is the lower-left corner of the window.
Window coordinates are usually relative to the parent window, hence for a top-level window relative to screen coordinates. In usual screen coordinate systems (0,0) is the upper-left.
Output are the values A, B, C and D which constitute the world-to-viewport transformation.
It's unclear what you actually want, but my best educated guess is that you want to recreate the OpenGL transformation chain. Of course, if you're using shaders, everything could be done in there. But in practice you'll probably just want to follow the chain
r_clip = P · M · r_in
r_NDC = r_clip/r_clip.w
r_viewport = (r_NDC.xy + 1)*viewport.width_height/2 + viewport.xy
where P is your projection matrix, for example the matrix produced by glOrtho.
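For the A, B, C, D form the exercise asks for, the whole chain collapses to one affine map per axis. Here is a sketch of the missing code, under my own assumption that u = A·x + C and v = B·y + D with the screen's y-axis pointing down; it is not verified against Coderunner, but it reproduces the (u,v)=(-200,367) test case from the question.

```cpp
#include <cmath>

void worldToViewportTransform(double winLeft, double winRight, double winBottom, double winTop,
                              int windowX, int windowY, int windowWidth, int windowHeight,
                              double* APtr, double* BPtr, double* CPtr, double* DPtr)
{
    // x: map [winLeft, winRight] onto [windowX, windowX + windowWidth]
    double A = windowWidth / (winRight - winLeft);
    double C = windowX - A * winLeft;
    // y: map [winBottom, winTop] onto [windowY + windowHeight, windowY];
    // the scale is negative because screen y grows downwards
    double B = -windowHeight / (winTop - winBottom);
    double D = windowY + windowHeight * winTop / (winTop - winBottom);
    *APtr = A; *BPtr = B; *CPtr = C; *DPtr = D;
}
```

With the question's test values (world window 1.5..4.5 by 0..3, a 600x400 viewport at (100,100)), the world point (0,1) maps to (u,v)=(-200,367).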

Arcball camera locked when parallel to up vector

I'm currently in the process of finishing the implementation of a camera that functions in the same way as the camera in Maya. The part I'm stuck on is the tumble functionality.
The problem is the following: the tumble feature works fine so long as the position of the camera is not parallel with the up vector (currently defined to be (0, 1, 0)). As soon as the camera becomes parallel with this vector (so it is looking straight up or down), the camera locks in place and will only rotate around the up vector instead of continuing to roll.
This question has already been asked here; unfortunately there is no actual solution to the problem. For reference, I also tried updating the up vector as I rotated the camera, but the resulting behaviour is not what I require (the view rolls as a result of the new orientation).
Here's the code for my camera:
using namespace glm;
// point is the position of the cursor in screen coordinates from GLFW
float deltaX = point.x - mImpl->lastPos.x;
float deltaY = point.y - mImpl->lastPos.y;
// Transform from screen coordinates into camera coordinates
Vector4 tumbleVector = Vector4(-deltaX, deltaY, 0, 0);
Matrix4 cameraMatrix = lookAt(mImpl->eye, mImpl->centre, mImpl->up);
Vector4 transformedTumble = inverse(cameraMatrix) * tumbleVector;
// Now compute the two vectors to determine the angle and axis of rotation.
Vector p1 = normalize(mImpl->eye - mImpl->centre);
Vector p2 = normalize((mImpl->eye + Vector(transformedTumble)) - mImpl->centre);
// Get the angle and axis
float theta = 0.1f * acos(dot(p1, p2));
Vector axis = cross(p1, p2);
// Rotate the eye.
mImpl->eye = Vector(rotate(Matrix4(1.0f), theta, axis) * Vector4(mImpl->eye, 0));
The vector library I'm using is GLM. Here's a quick reference on the custom types used here:
typedef glm::vec3 Vector;
typedef glm::vec4 Vector4;
typedef glm::mat4 Matrix4;
typedef glm::vec2 Point2;
mImpl is a PIMPL that contains the following members:
Vector eye, centre, up;
Point2 lastPoint;
Here is what I think: it has something to do with gimbal lock, which occurs with Euler angles (and thus spherical coordinates).
If you exceed your minimum (0, -zoom, 0) or maximum (0, zoom, 0) you have to toggle a boolean. This boolean tells you whether to treat deltaY as positive or not.
It could also just be caused by a singularity, so you could simply limit your polar angle to between -89.99° and 89.99°.
Your problem could be solved like this:
If your camera is exactly above (0, zoom, 0) or beneath (0, -zoom, 0) your object, then the camera can only roll.
(I am also assuming your object is at (0,0,0) and the up-vector is set to (0,1,0).)
There might be some mathematical trick to resolve this, but I would do it with linear algebra.
You need to introduce a new right-vector. A cross product gives it to you: right-vector = camera-vector x up-vector. Imagine these vectors start at (0,0,0); then, to get your camera position, just negate the camera-vector: (0,0,0)-(camera-vector).
So if you get some deltaX, you rotate towards the right-vector (around the up-vector) and update it. Any influence of deltaX should not change your up-vector.
If you get some deltaY, you rotate towards the up-vector (around the right-vector) and update it. (This has no influence on the right-vector.)
At https://en.wikipedia.org/wiki/Rotation_matrix, under "Rotation matrix from axis and angle", you can find an important formula.
There, u is the axis you want to rotate around and theta is the amount you want to pivot. The size of theta is proportional to deltaX/Y.
For example: We got an input from deltaX, so we rotate around the up-vector.
up-vector:= (0,1,0)
right-vector:= (0,0,-1)
cam-vector:= (1,0,0)
theta:=-1*30° // -1 due to the positive mathematical direction of rotation
R={[cos(-30°),0,-sin(-30°)],[0,1,0],[sin(-30°),0,cos(-30°)]}
new-cam-vector=R*cam-vector // normal matrix multiplication
One thing is left to be done: Update the right-vector.
right-vector = camera-vector x up-vector.
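The axis-angle formula from that page can be sketched in plain C++. This is a standalone illustration with names of my choosing (Vec3, rotateAroundAxis), not the asker's GLM types:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

double dot(const Vec3& a, const Vec3& b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Rodrigues' rotation formula: rotate v around the unit axis u by theta radians.
// v' = v*cos(theta) + (u x v)*sin(theta) + u*(u . v)*(1 - cos(theta))
Vec3 rotateAroundAxis(const Vec3& v, const Vec3& u, double theta) {
    double c = std::cos(theta), s = std::sin(theta);
    Vec3 uxv = cross(u, v);
    double ud = dot(u, v) * (1.0 - c);
    return { v[0]*c + uxv[0]*s + u[0]*ud,
             v[1]*c + uxv[1]*s + u[1]*ud,
             v[2]*c + uxv[2]*s + u[2]*ud };
}
```

For a deltaX-style input you would call rotateAroundAxis(camVector, upVector, theta), then recompute right-vector = camera-vector x up-vector as described above.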

Error control of directx camera rotation?

I use the mouse to control camera rotation in my program (using DirectX 9.0c). Mouse X rotates the camera around the Up vector and Mouse Y rotates it around the Right vector. The rotation calculation is as below:
void Camera::RotateCameraUp(float angle)
{
    D3DXMATRIX RoMatrix;
    D3DXMatrixRotationAxis(&RoMatrix, &vUp, angle);
    D3DXVec3TransformCoord(&vLook, &vLook, &RoMatrix);
    D3DXVec3TransformCoord(&vRight, &vRight, &RoMatrix);
}
void Camera::RotateCameraRight(float angle)
{
    D3DXMATRIX RoMatrix;
    D3DXMatrixRotationAxis(&RoMatrix, &vRight, angle);
    D3DXVec3TransformCoord(&vLook, &vLook, &RoMatrix);
    D3DXVec3TransformCoord(&vUp, &vUp, &RoMatrix);
}
Rotation around the Up or Right vector should not lead to rotation around the "LookAt" vector, but if I circle my mouse for a while and stop at the starting point, rotation around the "LookAt" vector has happened. I think it's caused by error accumulating in the calculation, but I don't know how to eliminate or control it. Any idea?
This is a common problem. You apply many rotations, and over time, the rounding errors sum up. After a while, the three vectors vUp, vLook and vRight are not normalized and orthogonal anymore.
I would use one of two options:
1.
Don't store vLook and vRight; instead, just store 2 angles. Assuming x is right, y is top, z is back, store a) the angle between your view axis and the xz-Plane, and b) the angle between the projection of your view axis on the xz-Plane and the z-Axis or x-Axis. Update these angles according to mouse move and calculate vLook and vRight from them.
2.
Set the y-component of vRight to 0, as vRight should be in the xz-Plane. Then re-orthonormalize the vectors (you know the vectors should be perpendicular to each other and have length 1). So after calculating the new vLook and vRight, apply these corrections:
vRight.y = 0
vRight = Normalize(vRight)
vUp = Normalize(Cross(vLook, vRight))
vLook = Normalize(Cross(vRight, vUp))
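Option 2 might look like this in plain C++. This is a sketch with a simple Vec3 type instead of the D3DX types, assuming the usual D3D convention of vLook pointing along +z:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a[1]*b[2] - a[2]*b[1],
             a[2]*b[0] - a[0]*b[2],
             a[0]*b[1] - a[1]*b[0] };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return { v[0]/len, v[1]/len, v[2]/len };
}

// Apply the three corrections from the answer after each rotation step,
// so the basis stays orthonormal and vRight stays in the xz-plane.
void reorthonormalize(Vec3& vLook, Vec3& vRight, Vec3& vUp) {
    vRight[1] = 0.0f;               // force vRight back into the xz-plane
    vRight = normalize(vRight);
    vUp = normalize(cross(vLook, vRight));
    vLook = normalize(cross(vRight, vUp));
}
```

Calling this once per frame (or every few frames) keeps the rounding errors from ever becoming visible.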

How to rotate an object according to its orientation

Similar question for WebGL: Rotate object around world axis.
I need to rotate an object in such a way that the user can move it with the mouse, as if dragging it. The problem is that glRotatef just rotates the object without taking its orientation into account. For WebGL the solution was to use quaternions, but I guess there are no quaternions in OpenGL.
This is how I achieve a rotation for now:
// rotation 2D GLfloat C-vector, posX, posY GLfloat's
void mouse(int button, int state, int x, int y) {
    if(button== GLUT_LEFT_BUTTON) {
        if(state== GLUT_DOWN)
        {
            posX= x;
            posY= y;
            onEvent= true;
        }
        else if(state== GLUT_UP)
        {
            GLfloat dx= x-posX;
            GLfloat dy= y-posY;
            rotation[0]= -dy/5;
            rotation[1]= dx/5;
            onEvent= false;
            glutPostRedisplay();
        }
    }
}
Then I handle the rotation in the display function:
glPushMatrix();
glRotatef(rotation[0],1,0,0);
glRotatef(rotation[1],0,1,0);
// Draw the object
glPopMatrix();
It kind of works, but like I said, it should look as if the user is dragging the object to rotate it. Instead, if for example the object is rotated 90 degrees around the X axis, when the user drags the mouse horizontally to make it rotate around the Y axis, it rotates in the opposite direction. I need an idea here: how could I do that?
Edit
I tried to use glMultMatrixf, but the object doesn't rotate correctly: it gets scaled instead of rotating, this is the code I've edited in the mouse function:
// Global variables:
// GLfloat xRotationMatrix[4][4];
// GLfloat yRotationMatrix[4][4];
else if(state== GLUT_UP && onEvent)
{
    GLfloat dx= (x-posX)/(180.0*5)*2.0*M_PI;
    GLfloat dy= (y-posY)/(180.0*5)*2.0*M_PI;
    // Computing rotations
    double cosX= cos(dx);
    double sinX= sin(dy);
    double cosY= cos(dy);
    double sinY= sin(dy);
    // x axis rotation
    xRotationMatrix[1][1]+= cosY;
    xRotationMatrix[1][2]+= -sinY;
    xRotationMatrix[2][2]+= sinY;
    xRotationMatrix[2][2]+= cosY;
    // y axis rotation
    yRotationMatrix[0][0]+= cosX;
    yRotationMatrix[0][2]+= sinX;
    yRotationMatrix[2][0]+= -sinX;
    yRotationMatrix[2][2]+= cosX;
    onEvent= false;
    glutPostRedisplay();
}
Then in the display function:
glPushMatrix();
glMultMatrixf((const GLfloat*)xRotationMatrix);
glMultMatrixf((const GLfloat*)yRotationMatrix);
glutSolidTeapot(10);
glPopMatrix();
This is the non rotated teapot:
If I drag the mouse horizontally to rotate the teapot around the y axis, instead of the rotation this is what I get:
First of all, a bit of algebra.
Let v be a vector, M your current modelview matrix, and R the matrix associated with a glRotate command. Then, if you use glRotate, what you get is:
M * R * v
That means you are rotating around object axes. You want to rotate around the world axes, that is:
R * M * v
See the difference? Unfortunately GL doesn't have a MatrixPreMult function.
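The non-commutativity is easy to check numerically. Here is a tiny standalone sketch (3x3 matrices and illustrative names of my choosing, nothing from the question's code) showing that M · R and R · M move the same point to different places:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double,3>,3>;
using Vec3 = std::array<double,3>;

Mat3 mul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

Vec3 apply(const Mat3& M, const Vec3& v) {
    return { M[0][0]*v[0] + M[0][1]*v[1] + M[0][2]*v[2],
             M[1][0]*v[0] + M[1][1]*v[1] + M[1][2]*v[2],
             M[2][0]*v[0] + M[2][1]*v[1] + M[2][2]*v[2] };
}

// Right-handed rotations around the x and y axes
Mat3 rotX(double t) {
    return {{ {1,0,0}, {0,std::cos(t),-std::sin(t)}, {0,std::sin(t),std::cos(t)} }};
}

Mat3 rotY(double t) {
    return {{ {std::cos(t),0,std::sin(t)}, {0,1,0}, {-std::sin(t),0,std::cos(t)} }};
}
```

With M = rotX(90°) as the "current modelview" and R = rotY(90°) as the new rotation, apply(mul(M, R), {1,0,0}) gives (0,1,0) while apply(mul(R, M), {1,0,0}) gives (0,0,-1): the former rotates around the object's y axis, the latter around the world's.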
In modern OpenGL we don't use the matrix stack anymore, in fact while working with shaders we manually pass the transformation matrices to the GL program. What (most) people do is write/use an external vector algebra library (like Eigen).
One possible (untested) workaround, which uses only the old deprecated GL stuff, may be something like this:
void rotate(float dx, float dy)
{
    //assuming that GL_MATRIX_MODE is GL_MODELVIEW
    float oldMatrix[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, oldMatrix);
    glLoadIdentity();
    glRotatef(-dy,1,0,0);
    glRotatef(dx,0,1,0);
    glMultMatrixf(oldMatrix);
}
And you put this code in your mouse function, not in the draw routine.
You can use this trick by keeping the view matrix in the GL matrix stack, then pushing/popping every time you have to draw an object. I wouldn't recommend something like that in a large project.
Notice also that if you invert the order of the two glRotatef calls in the code above you can get slightly different results, especially if dx and dy are not small.
This code might be slightly better:
float angle = sqrt(dx*dx+dy*dy)*scalingFactor;
glRotatef(angle,-dy,dx,0);

opengl - Rotating around a sphere using vectors and NOT glulookat

I'm having an issue with drawing a model and rotating it using the mouse. I'm pretty sure there's a problem with the mathematics, but I'm not sure what it is. The object just rotates in a weird way.
I want the object to start rotating from its current spot on each click, not to reset, because the vectors have changed and the calculation starts all over again.
void DrawHandler::drawModel(Model * model){
    unsigned int l_index;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Modeling transformation
    glLoadIdentity();
    Point tempCross;
    crossProduct(tempCross, model->getBeginRotate(), model->getCurrRotate());
    float tempInner= innerProduct(model->getBeginRotate(), model->getCurrRotate());
    float tempNormA= normProduct(model->getBeginRotate());
    float tempNormB= normProduct(model->getCurrRotate());
    glTranslatef(0.0, 0.0, -250.0);
    glRotatef(acos(tempInner/(tempNormA*tempNormB)) * 180.0 / M_PI, tempCross.getX(), tempCross.getY(), tempCross.getZ());
    glColor3d(1,1,1);
    glBegin(GL_TRIANGLES);
    for (l_index=0; l_index < model->getTrianglesDequeSize(); l_index++)
    {
        Triangle t = model->getTriangleByPosition(l_index);
        Vertex a1 = model->getVertexByPosition(t.getA());
        Vertex a2 = model->getVertexByPosition(t.getB());
        Vertex a3 = model->getVertexByPosition(t.getC());
        glVertex3f(a1.getX(), a1.getY(), a1.getZ());
        glVertex3f(a2.getX(), a2.getY(), a2.getZ());
        glVertex3f(a3.getX(), a3.getY(), a3.getZ());
    }
    glEnd();
}
This is the mouse function which saves the beginning vector of the rotating formula
void Controller::mouse(int btn, int state, int x, int y)
{
    x= x-WINSIZEX/2;
    y= y-WINSIZEY/2;
    if (btn==GLUT_LEFT_BUTTON){
        switch(state){
        case(GLUT_DOWN):
            if(!_rotating){
                _model->setBeginRotate(Point(float(x), float(y),
                    (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
                _rotating= true;
            }
            break;
        case(GLUT_UP):
            _rotating= false;
            break;
        }
    }
}
and finally the following function, which holds the current vector
(the beginning vector is where the mouse was clicked, and the curr vector is where the mouse is at the moment):
void Controller::getMousePosition(int x, int y){
    x= x-WINSIZEX/2;
    y= y-WINSIZEY/2;
    if(_rotating){
        _model->setCurrRotate(Point(float(x), float(y),
            (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0) ? 0 : float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
    }
}
where SPHERERADIUS is the sphere radius of 70.
Is any calculation wrong? I can't seem to find the problem...
Thanks
Why so complicated? Either you change the view matrix or you change the model matrix of your focused object. If you choose to change the model matrix and your object is centered at (0,0,0) of the world coordinate system, creating the illusion of rotating around a sphere is trivial - you just rotate into the opposite direction. If you want to change the view matrix (which is actually done when you change the position of the camera), you have to approximate the surface points on the chosen sphere. Therefore, you could introduce two parameters specifying two angles. Every time you click and move your mouse, you update the params and compute the new location on the sphere. There are some useful equations at http://en.wikipedia.org/wiki/Sphere.
Without knowing what library (or libraries) you're using, your code is rather difficult to read. It seems you're setting up your camera at (0, 0, -250), looking towards the origin, then rotating around the origin by the angle between the two vectors model->getCurrRotate() and model->getBeginRotate().
The problem seems to be that in "mouse down" events you explicitly set BeginRotate to the point on the sphere under the mouse, then in "mouse move" events you set CurrRotate to the point under the mouse, so every time you click somewhere else, you lose the previous state of rotation because BeginRotate and CurrRotate are simply overwritten.
Combining multiple rotations around arbitrary different axes is not a trivially simple task. The proper way to do it is to use quaternions. You may find this primer on quaternions and other 3D math concepts useful.
You might also want a more robust algorithm for converting screen coordinates to model coordinates on the sphere. The one you are using assumes the sphere appears 70 pixels in radius on the screen and that the projection matrix is orthographic.
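As a sketch of the quaternion approach (illustrative code of my own, not tied to the asker's Model/Controller classes): keep one persistent orientation quaternion, and on mouse-up fold the drag's rotation into it, so the next drag continues from the current state instead of overwriting BeginRotate/CurrRotate.

```cpp
#include <cmath>

// Minimal unit quaternion for accumulating arcball rotations.
struct Quat {
    double w, x, y, z;

    // Build a rotation of `angle` radians around the (nonzero) axis (ax, ay, az).
    static Quat fromAxisAngle(double ax, double ay, double az, double angle) {
        double len = std::sqrt(ax*ax + ay*ay + az*az);
        double s = std::sin(angle / 2.0) / len;
        return { std::cos(angle / 2.0), ax*s, ay*s, az*s };
    }

    // Hamilton product: applying q then *this is (*this) * q.
    Quat operator*(const Quat& q) const {
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w };
    }
};
```

On mouse-up you would do orientation = dragRotation * orientation. Composing two 45° turns about the same axis yields one 90° turn, which is exactly the accumulation behaviour the question's code is missing.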