Projected shadow with shadow matrix, simple test fails - OpenGL

I wrote a little program to test how projected shadows work.
I wanted to check in particular the case where the point to project (it could be the vertex of a triangle) is not situated between the light source and the plane but behind the light itself, that is, the light is between the point and the plane.
The problem is that my little program is not working even in the case where the point is between the light and the plane. I have checked the calculations dozens of times, so I guess the error must be in the logic, but I can't find it.
Here is the code:
public class test {

    int x = 0;
    int y = 1;
    int z = 2;
    int w = 3;

    float floor[][] = {
        {-100.0f, -100.0f, 0.0f},
        {100.0f, -100.0f, 0.0f},
        {100.0f, 100.0f, 0.0f},
        {-100.0f, 100.0f, 0.0f}};

    private float shadow_floor[] = new float[16];
    float light_position[] = {0.0f, 0.0f, 10.0f, 1.0f};

    public test() {
        // Find the floor plane based on three known points
        float plane_floor[] = calculatePlane(floor[1], floor[2], floor[3]);
        // Store the shadow matrix for the floor
        shadow_floor = shadowMatrix(plane_floor, light_position);
        float[] point = new float[]{1.0f, 0.0f, 5.0f, 1.0f};
        float[] projectedPoint = pointFmatrixF(point, shadow_floor);
        System.out.println("point: (" + point[x] + ", " + point[y] + ", " + point[z] + ", "
                + point[w] + ")");
        System.out.println("projectedPoint: (" + projectedPoint[x] + ", " + projectedPoint[y]
                + ", " + projectedPoint[z] + ", " + projectedPoint[w] + ")");
    }

    public static void main(String args[]) {
        test test = new test();
    }

    // Build the shadow (planar projection) matrix, stored column-major
    public float[] shadowMatrix(float plane[], float light_pos[]) {
        float shadow_mat[] = new float[16];
        float dot;
        dot = plane[x] * light_pos[x] + plane[y] * light_pos[y]
                + plane[z] * light_pos[z] + plane[w] * light_pos[w];
        shadow_mat[0] = dot - light_pos[x] * plane[x];
        shadow_mat[4] = -light_pos[x] * plane[y];
        shadow_mat[8] = -light_pos[x] * plane[z];
        shadow_mat[12] = -light_pos[x] * plane[w];
        shadow_mat[1] = -light_pos[y] * plane[x];
        shadow_mat[5] = dot - light_pos[y] * plane[y];
        shadow_mat[9] = -light_pos[y] * plane[z];
        shadow_mat[13] = -light_pos[y] * plane[w];
        shadow_mat[2] = -light_pos[z] * plane[x];
        shadow_mat[6] = -light_pos[z] * plane[y];
        shadow_mat[10] = dot - light_pos[z] * plane[z];
        shadow_mat[14] = -light_pos[z] * plane[w];
        shadow_mat[3] = -light_pos[w] * plane[x];
        shadow_mat[7] = -light_pos[w] * plane[y];
        shadow_mat[11] = -light_pos[w] * plane[z];
        shadow_mat[15] = dot - light_pos[w] * plane[w];
        return shadow_mat;
    }

    public float[] calculatePlane(float p1[], float p2[], float p3[]) {
        // Array for the plane equation
        float plane[] = new float[4];
        // Given two vectors (three points) in the plane, the normal can be computed
        // We want absolute values
        plane[x] = Math.abs(((p2[y] - p1[y]) * (p3[z] - p1[z])) - ((p2[z] - p1[z])
                * (p3[y] - p1[y])));
        plane[y] = Math.abs(((p2[z] - p1[z]) * (p3[x] - p1[x])) - ((p2[x] - p1[x])
                * (p3[z] - p1[z])));
        plane[z] = Math.abs(((p2[x] - p1[x]) * (p3[y] - p1[y])) - ((p2[y] - p1[y])
                * (p3[x] - p1[x])));
        plane[w] = -(plane[x] * p1[x] + plane[y] * p1[y] + plane[z] * p1[z]);
        return plane;
    }

    public float[] pointFmatrixF(float[] point, float[] matrix) {
        int x = 0;
        int y = 1;
        int z = 2;
        float[] transformedPoint = new float[4];
        transformedPoint[x] =
                matrix[0] * point[x]
                + matrix[4] * point[y]
                + matrix[8] * point[z]
                + matrix[12];
        transformedPoint[y] =
                matrix[1] * point[x]
                + matrix[5] * point[y]
                + matrix[9] * point[z]
                + matrix[13];
        transformedPoint[z] =
                matrix[2] * point[x]
                + matrix[6] * point[y]
                + matrix[10] * point[z]
                + matrix[14];
        // BUG (see the solution below): w is assumed to be 1 here, but the
        // shadow matrix produces a general homogeneous coordinate that must
        // be computed and divided out.
        transformedPoint[w] = 1;
        return transformedPoint;
    }
}
If the plane is the xy plane, the light is at (0, 0, 10) and the point at (1, 0, 5), then the projected point on the plane should be (2, 0, 0), but the program returns (400000.0, 0.0, 0.0, 1.0).

Solved: I was incorrectly assuming that the last coordinate of the projected point was 1, but it isn't.
https://math.stackexchange.com/questions/320527/projecting-a-point-on-a-plane-through-a-matrix
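For the record, the numbers check out once the divide is added: with the code above the plane comes out as (0, 0, 40000, 0) and dot = 400000, so the full homogeneous result for the test point is (400000, 0, 0, 200000), and dividing by w = 200000 gives the expected (2, 0, 0). A minimal sketch of the corrected transform (in C++ rather than the Java above; the function name is illustrative):

// Multiply a column-major 4x4 matrix by a homogeneous point, then divide
// by the resulting w to get the affine projected point.
void projectPoint(const float m[16], const float p[4], float out[3]) {
    float r[4];
    for (int i = 0; i < 4; ++i)
        r[i] = m[i] * p[0] + m[i + 4] * p[1] + m[i + 8] * p[2] + m[i + 12] * p[3];
    for (int i = 0; i < 3; ++i)
        out[i] = r[i] / r[3]; // the divide the original code skipped
}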

Related

Interpreting Visual Studio profiler: is this subtraction slow? Can I make all this any faster?

I'm using the Visual Studio profiler for the first time and I'm trying to interpret the results. Looking at the percentages on the left, I found this subtraction's time cost a bit strange:
Other parts of the code contain more complex expressions, like:
Even a simple multiplication seems way faster than the subtraction:
Other multiplications take way longer, and I really don't get why, like this:
So, I guess my question is whether there is anything weird going on here.
Complex expressions take longer than that subtraction, and some expressions take way longer than other, similar ones. I ran the profiler several times and the distribution of the percentages is always like this. Am I just interpreting this wrong?
Update:
I was asked to give the profile for the whole function, so here it is, even though it's a bit big. I ran the function inside a for loop for 1 minute and got 50k samples. The function contains a double loop. I include the text first for ease, followed by the profiling pictures. Note that the code in text form is slightly updated.
for (int i = 0; i < NUMBER_OF_CONTOUR_POINTS; i++) {
    vec4 contourPointV(contour3DPoints[i], 1);
    float phi = angles[i];
    float xW = pose[0][0] * contourPointV.x + pose[1][0] * contourPointV.y + contourPointV.z * pose[2][0] + pose[3][0];
    float yW = pose[0][1] * contourPointV.x + pose[1][1] * contourPointV.y + contourPointV.z * pose[2][1] + pose[3][1];
    float zW = pose[0][2] * contourPointV.x + pose[1][2] * contourPointV.y + contourPointV.z * pose[2][2] + pose[3][2];
    float x = -G_FU_STRICT * xW / zW;
    float y = -G_FV_STRICT * yW / zW;
    x = (x + 1) * G_WIDTHo2;
    y = (y + 1) * G_HEIGHTo2;
    y = G_HEIGHT - y;
    phi -= extraTheta;
    if (phi < 0) phi += CV_PI2;
    int indexForTable = phi * oneKoverPI;
    //vec2 ray(cos(phi), sin(phi));
    vec2 ray(cos_pre[indexForTable], sin_pre[indexForTable]);
    vec2 ray2(-ray.x, -ray.y);
    float outerStepX = ray.x * step;
    float outerStepY = ray.y * step;
    cv::Point2f outerPoint(x + outerStepX, y + outerStepY);
    cv::Point2f innerPoint(x - outerStepX, y - outerStepY);
    cv::Point2f contourPointCV(x, y);
    cv::Point2f contourPointCVcopy(x, y);
    bool cut = false;
    if (!isInView(outerPoint.x, outerPoint.y) || !isInView(innerPoint.x, innerPoint.y)) {
        cut = true;
    }
    bool outside2 = true;
    bool outside1 = true;
    if (cut) {
        outside2 = myClipLine(contourPointCV.x, contourPointCV.y, outerPoint.x, outerPoint.y, G_WIDTH - 1, G_HEIGHT - 1);
        outside1 = myClipLine(contourPointCVcopy.x, contourPointCVcopy.y, innerPoint.x, innerPoint.y, G_WIDTH - 1, G_HEIGHT - 1);
    }
    myIterator innerRayMine(contourPointCVcopy, innerPoint);
    myIterator outerRayMine(contourPointCV, outerPoint);
    if (!outside1) {
        innerRayMine.end = true;
        innerRayMine.prob = true;
    }
    if (!outside2) {
        outerRayMine.end = true;
        innerRayMine.prob = true;
    }
    vec2 normal = -ray;
    float dfdxTerm = -normal.x;
    float dfdyTerm = normal.y;
    vec3 point3D = vec3(xW, yW, zW);
    cv::Point contourPoint((int)x, (int)y);
    float Xc = point3D.x; float Xc2 = Xc * Xc; float Yc = point3D.y; float Yc2 = Yc * Yc; float Zc = point3D.z; float Zc2 = Zc * Zc;
    float XcYc = Xc * Yc; float dfdxFu = dfdxTerm * G_FU; float dfdyFv = dfdyTerm * G_FU; float overZc2 = 1 / Zc2; float overZc = 1 / Zc;
    pixelJacobi[0] = (dfdyFv * (Yc2 + Zc2) + dfdxFu * XcYc) * overZc2;
    pixelJacobi[1] = (-dfdxFu * (Xc2 + Zc2) - dfdyFv * XcYc) * overZc2;
    pixelJacobi[2] = (-dfdyFv * Xc + dfdxFu * Yc) * overZc;
    pixelJacobi[3] = -dfdxFu * overZc;
    pixelJacobi[4] = -dfdyFv * overZc;
    pixelJacobi[5] = (dfdyFv * Yc + dfdxFu * Xc) * overZc2;
    float commonFirstTermsSum = 0;
    float commonFirstTermsSquaredSum = 0;
    int test = 0;
    while (!innerRayMine.end) {
        test++;
        cv::Point xy = innerRayMine.pos(); innerRayMine++;
        int x = xy.x;
        int y = xy.y;
        float dx = x - contourPoint.x;
        float dy = y - contourPoint.y;
        vec2 dxdy(dx, dy);
        float raw = -glm::dot(dxdy, normal);
        float heavisideTerm = heaviside_pre[(int)raw * 100 + 1000];
        float deltaTerm = delta_pre[(int)raw * 100 + 1000];
        const Vec3b rgb = ante[y * 640 + x];
        int red = rgb[0]; int green = rgb[1]; int blue = rgb[2];
        red = red >> 3; red = red << 10; green = green >> 3; green = green << 5; blue = blue >> 3;
        int colorIndex = red + green + blue;
        pF = pFPointer[colorIndex];
        pB = pBPointer[colorIndex];
        float denAsMul = 1 / (pF + pB + 0.000001);
        pF = pF * denAsMul;
        float pfMinusPb = 2 * pF - 1;
        float denominator = heavisideTerm * pfMinusPb + pB + 0.000001;
        float commonFirstTerm = -pfMinusPb / denominator * deltaTerm;
        commonFirstTermsSum += commonFirstTerm;
        commonFirstTermsSquaredSum += commonFirstTerm * commonFirstTerm;
    }
}
Visual Studio profiles by sampling: it interrupts execution frequently and records the value of the instruction pointer; it then maps that back to the source and calculates how often each line is hit.
There are a few issues with that: it's not always possible to figure out which source line produced a specific assembly instruction in optimized code.
One trick I use is to move the code of interest into a separate function and declare it with __declspec(noinline).
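For instance, a minimal sketch of that isolation trick (the function name and parameters are illustrative, not from the poster's code):

// Hypothetical: isolating the expression under investigation in its own
// non-inlined function makes the sampler attribute its cost unambiguously.
__declspec(noinline) float adjustPhi(float phi, float extraTheta) {
    return phi - extraTheta; // the subtraction being profiled
}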
In your example, are you sure the subtraction was performed as many times as the multiplication? I would be more puzzled by the difference between the subsequent multiplications (0.39% and 0.53%).
Update:
I believe that the following lines:
float phi = angles[i];
and
phi -= extraTheta;
were merged in the generated assembly, so the time spent loading angles[i] was attributed to the subtraction line.
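In C++ terms, the effect looks roughly like this (illustrative only):

// The two source lines collapse into one expression after optimization,
// so a sample taken during the (possibly cache-missing) load of angles[i]
// gets charged to the subtraction's source line.
float fusedPhi(const float* angles, int i, float extraTheta) {
    return angles[i] - extraTheta;
}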

Manipulating sfml Vertex Array

I am doing research on the SFML vertex array functions. Based on this tutorial I've been introduced to a basic implementation and want to add to it. Unfortunately I am relatively new to OOP and would appreciate any help adding to this.
The output generates a checkerboard-like pattern using a sprite grid.
My goal is to connect the grid-floor tiles using a pathfinding algorithm (recursive backtracker) to generate a path.
The rest of this part is instantiated in main.cpp:
//load the texture for our background vertex array
Texture textureBackground;
textureBackground.loadFromFile("graphics/background_sheet.png");
once in the game loop as:
//pass the vertex array by reference to the createBackground function
int tileSize = createBackground(background, arena);
and finally in the draw scene:
window.draw(background, &textureBackground);
#include "stdafx.h"
#include <SFML/Graphics.hpp>
#include "zArena.h"
int createBackground(VertexArray& rVA, IntRect arena)
{
// Anything we do to rVA we are actually doing to background (in the main function)
// How big is each tile/texture
const int TILE_SIZE = 50;
const int TILE_TYPES = 3;
const int VERTS_IN_QUAD = 4;
int worldWidth = arena.width / TILE_SIZE;
int worldHeight = arena.height / TILE_SIZE;
// What type of primitive are we using?
rVA.setPrimitiveType(Quads);
// Set the size of the vertex array
rVA.resize(worldWidth * worldHeight * VERTS_IN_QUAD);
// Start at the beginning of the vertex array
int currentVertex = 0;
for (int w = 0; w < worldWidth; w++)
{
for (int h = 0; h < worldHeight; h++)
{
// Position each vertex in the current quad
rVA[currentVertex + 0].position = Vector2f(w * TILE_SIZE, h * TILE_SIZE);
rVA[currentVertex + 1].position = Vector2f((w * TILE_SIZE) + TILE_SIZE, h * TILE_SIZE);
rVA[currentVertex + 2].position = Vector2f((w * TILE_SIZE) + TILE_SIZE, (h * TILE_SIZE) + TILE_SIZE);
rVA[currentVertex + 3].position = Vector2f((w * TILE_SIZE), (h * TILE_SIZE) + TILE_SIZE);
// Define the position in the Texture to draw for current quad
// Either mud, stone, grass or wall
//if (h == 0 || h == worldHeight - 1 || w == 0 || w == worldWidth - 1)
if ((h % 2 !=0)&& (w % 2 != 0))
{
// Use the wall texture
rVA[currentVertex + 0].texCoords = Vector2f(0, 0 + TILE_TYPES * TILE_SIZE);
rVA[currentVertex + 1].texCoords = Vector2f(TILE_SIZE, 0 + TILE_TYPES * TILE_SIZE);
rVA[currentVertex + 2].texCoords = Vector2f(TILE_SIZE, TILE_SIZE + TILE_TYPES * TILE_SIZE);
rVA[currentVertex + 3].texCoords = Vector2f(0, TILE_SIZE + TILE_TYPES * TILE_SIZE);
}
else
{
// Use a random floor texture
srand((int)time(0) + h * w - h);
int mOrG = (rand() % TILE_TYPES);
int verticalOffset = mOrG * TILE_SIZE;
//int verticalOffset = 0;
rVA[currentVertex + 0].texCoords = Vector2f(0, 0 + verticalOffset);
rVA[currentVertex + 1].texCoords = Vector2f(TILE_SIZE, 0 + verticalOffset);
rVA[currentVertex + 2].texCoords = Vector2f(TILE_SIZE, TILE_SIZE + verticalOffset);
rVA[currentVertex + 3].texCoords = Vector2f(0, TILE_SIZE + verticalOffset);
}
// Position ready for the next for vertices
currentVertex = currentVertex + VERTS_IN_QUAD;
}
}
return TILE_SIZE;
}
As far as I can see, you're generating your tiles on the fly. If you want to create something like a walkable space, you should generate your tile map first, and then draw it based on the generated content.
This may be overkill for your question, but there are several ways to generate random maps satisfying specific constraints.
Once you have made that choice, you can draw just as you do now, but instead of
// Use a random floor texture
srand((int)time(0) + h * w - h);
int mOrG = (rand() % TILE_TYPES);
int verticalOffset = mOrG * TILE_SIZE;
you should have something like
// Select the texture rect based on the generated tilemap
int mOrG = tilemap[w][h]; // Or tilemap[h * worldWidth + w] if you store it as a one-dimensional array
int verticalOffset = mOrG * TILE_SIZE;
With this approach you must pass the tilemap to your render method or, even better, create a TileMap class overriding the draw() method.
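For illustration, a minimal sketch of the "generate first, draw later" idea (generateTilemap is a hypothetical name; the random fill is a placeholder for your recursive backtracker):

#include <cstdlib>
#include <ctime>
#include <vector>

// Build the whole tile map up front; drawing then just reads from it.
std::vector<std::vector<int>> generateTilemap(int worldWidth, int worldHeight, int tileTypes)
{
    std::srand(static_cast<unsigned>(std::time(nullptr))); // seed once, not per tile
    std::vector<std::vector<int>> tilemap(worldWidth, std::vector<int>(worldHeight));
    for (int w = 0; w < worldWidth; w++)
        for (int h = 0; h < worldHeight; h++)
            tilemap[w][h] = std::rand() % tileTypes; // placeholder: run the maze generator here instead
    return tilemap;
}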

Quaternion rotation does not work

I want to make a quaternion-based camera. On the internet I found this:
https://www.gamedev.net/resources/_/technical/math-and-physics/a-simple-quaternion-based-camera-r1997
from which I took this code:
typedef struct { float w, x, y, z; } quaternion;

double length(quaternion quat)
{
    return sqrt(quat.x * quat.x + quat.y * quat.y +
                quat.z * quat.z + quat.w * quat.w);
}

quaternion normalize(quaternion quat)
{
    double L = length(quat);
    quat.x /= L;
    quat.y /= L;
    quat.z /= L;
    quat.w /= L;
    return quat;
}

quaternion conjugate(quaternion quat)
{
    quat.x = -quat.x;
    quat.y = -quat.y;
    quat.z = -quat.z;
    return quat;
}

quaternion mult(quaternion A, quaternion B)
{
    quaternion C;
    C.x = A.w*B.x + A.x*B.w + A.y*B.z - A.z*B.y;
    C.y = A.w*B.y - A.x*B.z + A.y*B.w + A.z*B.x;
    C.z = A.w*B.z + A.x*B.y - A.y*B.x + A.z*B.w;
    C.w = A.w*B.w - A.x*B.x - A.y*B.y - A.z*B.z;
    return C;
}

void RotateCamera(double Angle, double x, double y, double z)
{
    quaternion temp, quat_view, result;
    temp.x = x * sin(Angle/2);
    temp.y = y * sin(Angle/2);
    temp.z = z * sin(Angle/2);
    temp.w = cos(Angle/2);
    quat_view.x = View.x;
    quat_view.y = View.y;
    quat_view.z = View.z;
    quat_view.w = 0;
    result = mult(mult(temp, quat_view), conjugate(temp));
    View.x = result.x;
    View.y = result.y;
    View.z = result.z;
}
But I'm having problems implementing this line:
gluLookAt(Position.x, Position.y, Position.z,
          View.x, View.y, View.z, Up.x, Up.y, Up.z);
because I have no idea what to use as 'Up'. I tried (0, 0, 0), but that only showed a black screen. Any help is greatly appreciated!
EDIT :
Somewhere on this site I found the code below, which converts a quaternion to a matrix. How can I apply this matrix with glMultMatrixf()?
float *quat_to_matrix(quaternion quat) {
    /* Static storage: the original returned a pointer to a local array
       (and assigned to an array with '='), which is not valid C. */
    static float matrix[16];
    double qx = quat.x;
    double qy = quat.y;
    double qz = quat.z;
    double qw = quat.w;
    const double n = 1.0 / sqrt(qx*qx + qy*qy + qz*qz + qw*qw);
    qx *= n;
    qy *= n;
    qz *= n;
    qw *= n;
    const float m[16] = {
        1.0f - 2.0f*qy*qy - 2.0f*qz*qz, 2.0f*qx*qy - 2.0f*qz*qw, 2.0f*qx*qz + 2.0f*qy*qw, 0.0f,
        2.0f*qx*qy + 2.0f*qz*qw, 1.0f - 2.0f*qx*qx - 2.0f*qz*qz, 2.0f*qy*qz - 2.0f*qx*qw, 0.0f,
        2.0f*qx*qz - 2.0f*qy*qw, 2.0f*qy*qz + 2.0f*qx*qw, 1.0f - 2.0f*qx*qx - 2.0f*qy*qy, 0.0f,
        0.0f, 0.0f, 0.0f, 1.0f};
    for (int i = 0; i < 16; ++i) matrix[i] = m[i];
    return matrix;
}
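A minimal usage sketch (hedged: glMultMatrixf reads the 16 floats in column-major order, so depending on your conventions you may need the transpose, which for a pure rotation is also its inverse; camera_quat is an illustrative name):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(quat_to_matrix(camera_quat));
/* ...then apply the camera translation and draw the scene... */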
EDIT 2 :
I used glMultMatrixf() and it worked. But I finally found out that the output of RotateCamera() turns my quaternion into zero. Does anybody know what's wrong with this method?
void RotateCamera(double Angle, double x, double y, double z)
{
    quaternion temp, quat_view, result;
    temp.x = x * sin(Angle/2);
    temp.y = y * sin(Angle/2);
    temp.z = z * sin(Angle/2);
    temp.w = cos(Angle/2);
    quat_view.x = View.x;
    quat_view.y = View.y;
    quat_view.z = View.z;
    quat_view.w = 0;
    result = mult(mult(temp, quat_view), conjugate(temp));
    View.x = result.x;
    View.y = result.y;
    View.z = result.z;
}
It doesn't really make sense to me, but I will try to answer anyway :D ... why don't you just rotate it using glRotatef(angle, 0, 0, 1) for a rotation about the z axis? The function's signature is glRotatef(angle, x_axis, y_axis, z_axis), where the last three parameters define the axis of rotation.
For the second question, from what I know you should decrement the angle; you can experiment with the function to see for yourself ;).
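One additional hedged guess about the zeroed quaternion: q·v·q* only behaves as a rotation when temp is a unit quaternion, and RotateCamera never guarantees that (it also assumes Angle is in radians). A cheap safeguard inside RotateCamera, reusing the normalize() already defined above:

temp.w = cos(Angle/2); /* Angle must be in radians, not degrees */
temp = normalize(temp); /* guard against a non-unit (x, y, z) axis shrinking View */

Also note that if View starts out as the zero vector, the result will always be zero no matter what the rotation does.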

DirectX/C++: Marching Cubes Indexing

I've implemented the Marching Cubes algorithm in a DirectX environment (to test and have fun). Upon completion, I noticed that the resulting model looks heavily distorted, as if the indices were off.
I've attempted to extract the indices, but I think the vertices are already ordered correctly by the lookup tables; see the examples at http://paulbourke.net/geometry/polygonise/ . The current build uses a 15^3 volume.
Marching cubes iterates over the array as normal:
for (float iX = 0; iX < CellFieldSize.x; iX++){
    for (float iY = 0; iY < CellFieldSize.y; iY++){
        for (float iZ = 0; iZ < CellFieldSize.z; iZ++){
            MarchCubes(XMFLOAT3(iX*StepSize, iY*StepSize, iZ*StepSize), StepSize);
        }
    }
}
The MarchCubes function is called:
void MC::MarchCubes(){
    ...
    int Corner, Vertex, VertexTest, Edge, Triangle, FlagIndex, EdgeFlags;
    float Offset;
    XMFLOAT3 Color;
    float CubeValue[8];
    XMFLOAT3 EdgeVertex[12];
    XMFLOAT3 EdgeNorm[12];
    //Local copy
    for (Vertex = 0; Vertex < 8; Vertex++) {
        CubeValue[Vertex] = (this->*fSample)(
            in_Position.x + VertexOffset[Vertex][0] * Scale,
            in_Position.y + VertexOffset[Vertex][1] * Scale,
            in_Position.z + VertexOffset[Vertex][2] * Scale
        );
    }
    FlagIndex = 0;
Intersection calculations:
    ...
    //Test vertices for intersection.
    for (VertexTest = 0; VertexTest < 8; VertexTest++){
        if (CubeValue[VertexTest] <= TargetValue)
            FlagIndex |= 1 << VertexTest;
    }
    //Find which edges are intersected by the surface.
    EdgeFlags = CubeEdgeFlags[FlagIndex];
    if (EdgeFlags == 0){
        return;
    }
    for (Edge = 0; Edge < 12; Edge++){
        if (EdgeFlags & (1 << Edge)) {
            Offset = GetOffset(CubeValue[EdgeConnection[Edge][0]], CubeValue[EdgeConnection[Edge][1]], TargetValue); // GetOffset definition needed!
            EdgeVertex[Edge].x = in_Position.x + VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0] * Scale;
            EdgeVertex[Edge].y = in_Position.y + VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1] * Scale;
            EdgeVertex[Edge].z = in_Position.z + VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2] * Scale;
            GetNormal(EdgeNorm[Edge], EdgeVertex[Edge].x, EdgeVertex[Edge].y, EdgeVertex[Edge].z); //Need normal values
        }
    }
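For reference, in Bourke-style implementations GetOffset just linearly interpolates where the isosurface crosses the edge; a sketch consistent with that (assumed, since the definition isn't shown here):

// Fraction along the edge (0..1) at which the sampled value reaches the target.
float GetOffset(float v1, float v2, float target) {
    float delta = v2 - v1;
    return (delta == 0.0f) ? 0.5f : (target - v1) / delta;
}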
The result then gets pushed into a holding struct for DirectX:
for (Triangle = 0; Triangle < 5; Triangle++) {
    if (TriangleConnectionTable[FlagIndex][3 * Triangle] < 0) break;
    for (Corner = 0; Corner < 3; Corner++) {
        Vertex = TriangleConnectionTable[FlagIndex][3 * Triangle + Corner];
        GetColor(Color, EdgeVertex[Vertex], EdgeNorm[Vertex]);
        Data.VertexData.push_back(XMFLOAT3(EdgeVertex[Vertex].x, EdgeVertex[Vertex].y, EdgeVertex[Vertex].z));
        Data.NormalData.push_back(XMFLOAT3(EdgeNorm[Vertex].x, EdgeNorm[Vertex].y, EdgeNorm[Vertex].z));
        Data.ColorData.push_back(XMFLOAT4(Color.x, Color.y, Color.z, 1.0f));
    }
}
(This is the same ordering as the original GL implementation)
Turns out, I had missed the parentheses that enforce the intended operator precedence:
EdgeVertex[Edge].x = in_Position.x + (VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0]) * Scale;
EdgeVertex[Edge].y = in_Position.y + (VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1]) * Scale;
EdgeVertex[Edge].z = in_Position.z + (VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2]) * Scale;
Corrected, obtained Visine; resumed fun.

Triangle rotation causes deformation

I would like to rotate my triangle, but there are some problems.
Its default form:
I am rotating it with my arrow keys, but as you can see, the triangle's shape gets deformed:
Here is my code:
typedef struct {
    point_t pos; // position of the triangle
    float angle; // view angle
    float r;
} weapon_t;

void drawPlayer(weapon_t tw) {
    glBegin(GL_TRIANGLES);
    glColor3f(0.1, 0.2, 0.3);
    glVertex2f(tw.pos.x, tw.pos.y);
    glVertex2f(tw.pos.x + 150 * cos(tw.angle * D2R), tw.pos.y + 100 * sin(tw.angle * D2R) + 8);
    glVertex2f(tw.pos.x + 150 * cos(tw.angle * D2R), tw.pos.y + 100 * sin(tw.angle * D2R) - 8);
    glEnd();
}

void onTimer(int v) {
    glutTimerFunc(TIMER_PERIOD, onTimer, 0);
    if (right) {
        if (weapon.angle != -45)
            turnWeapon(&weapon, -3);
    }
    if (left) {
        if (weapon.angle != 45)
            turnWeapon(&weapon, 3);
    }
}
Any ideas, guys?
I don't know where you got your formulas from, but they are wrong. To rotate a 2D vector anti-clockwise by an angle x you can use the rotation matrix [cos(x), -sin(x); sin(x), cos(x)] (you can prove this easily with exp(i*x) = cos(x) + i*sin(x)). At angle 0 your two outer vertices sit at offsets [150, 8] and [150, -8] from the position, so those are the vectors you want to rotate; multiplying them by the rotation matrix gives [150*cos(x) - 8*sin(x), 150*sin(x) + 8*cos(x)] and [150*cos(x) + 8*sin(x), 150*sin(x) - 8*cos(x)].
Translated into code this looks like this:
float c = cos(tw.angle * D2R);
float s = sin(tw.angle * D2R);
glVertex2f(tw.pos.x + 150 * c - 8 * s, tw.pos.y + 150 * s + 8 * c);
glVertex2f(tw.pos.x + 150 * c + 8 * s, tw.pos.y + 150 * s - 8 * c);
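Putting it together, a corrected drawPlayer (a minimal sketch assuming the same weapon_t struct and D2R degrees-to-radians constant as above):

void drawPlayer(weapon_t tw) {
    // Rotate the fixed offsets [150, 8] and [150, -8] around the apex,
    // instead of scaling x and y by different factors as before.
    float c = cos(tw.angle * D2R);
    float s = sin(tw.angle * D2R);
    glBegin(GL_TRIANGLES);
    glColor3f(0.1f, 0.2f, 0.3f);
    glVertex2f(tw.pos.x, tw.pos.y);
    glVertex2f(tw.pos.x + 150 * c - 8 * s, tw.pos.y + 150 * s + 8 * c);
    glVertex2f(tw.pos.x + 150 * c + 8 * s, tw.pos.y + 150 * s - 8 * c);
    glEnd();
}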