I'm currently implementing cameras in my engine and I'm having an issue when the camera looks straight down at the floor (for example, eye position (0, 0, 50), target (0, 0, 0)) with an up vector of (0, 0, 1).
When I do the math, the view direction is parallel to the up vector, so their cross product gives (0, 0, 0), the view is screwed, and nothing is rendered. If I move the camera even slightly, everything works as expected.
How can I solve this?
if (node == NULL || target == NULL) return;
node->Update();
eye = node->GetWorldMatrix() * GRPVECTOR(0.0, 0.0, 0.0); // transform the origin to get the node's world position
target->Update();
obj = target->GetWorldMatrix() * GRPVECTOR(0.0, 0.0, 0.0);

GRPVECTOR ev;
GRPVECTOR z;
GRPVECTOR x_tmp;
GRPVECTOR x;
GRPVECTOR y;

// Forward axis: from the target to the eye, normalized.
ev = eye - obj;
ev.Normalize();
z = ev;

// Right axis: up x forward. This is the cross product that degenerates
// when the view direction is parallel to the up vector.
x_tmp.CrossProduct(&up, &z);
if (x_tmp.GetLengthf() == 0.0f)
    return; // degenerate basis: the view would be screwed, so I return
x_tmp.Normalize();
x = x_tmp;

// True up axis: forward x right (both unit length, so no normalize needed).
y.CrossProduct(&z, &x);

// Rotation part of the view matrix: the basis vectors, transposed.
this->viewmatrix.matrix[0][0] = x.vector[0];
this->viewmatrix.matrix[0][1] = y.vector[0];
this->viewmatrix.matrix[0][2] = z.vector[0];
this->viewmatrix.matrix[0][3] = 0.0f;
this->viewmatrix.matrix[1][0] = x.vector[1];
this->viewmatrix.matrix[1][1] = y.vector[1];
this->viewmatrix.matrix[1][2] = z.vector[1];
this->viewmatrix.matrix[1][3] = 0.0f;
this->viewmatrix.matrix[2][0] = x.vector[2];
this->viewmatrix.matrix[2][1] = y.vector[2];
this->viewmatrix.matrix[2][2] = z.vector[2];
this->viewmatrix.matrix[2][3] = 0.0f;

// Translation part: -(basis . eye).
this->viewmatrix.matrix[3][0] = -x.vector[0] * eye.vector[0] + -x.vector[1] * eye.vector[1] + -x.vector[2] * eye.vector[2];
this->viewmatrix.matrix[3][1] = -y.vector[0] * eye.vector[0] + -y.vector[1] * eye.vector[1] + -y.vector[2] * eye.vector[2];
this->viewmatrix.matrix[3][2] = -z.vector[0] * eye.vector[0] + -z.vector[1] * eye.vector[1] + -z.vector[2] * eye.vector[2];
this->viewmatrix.matrix[3][3] = 1.0f;

GRPMATRIX Translate;
Translate.BuildTranslationMatrix(-obj.vector[0], -obj.vector[1], -obj.vector[2]);
this->viewmatrix.GetMulplicationMatrix(&this->viewmatrix, &Translate);
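One common fix is to substitute a fallback up vector instead of bailing out when the basis collapses. A minimal sketch, reusing the GRPVECTOR calls from the code above (SafeUp is a hypothetical helper name, and I'm assuming the three-argument constructor shown earlier):

// Hypothetical helper: returns a usable up vector for building the view
// basis. If 'up' is (nearly) parallel to the view direction, the
// up x viewDir cross product vanishes, so substitute a world axis that
// isn't parallel to the view direction.
GRPVECTOR SafeUp(const GRPVECTOR& viewDir, const GRPVECTOR& up)
{
    GRPVECTOR t;
    t.CrossProduct(&up, &viewDir);
    if (t.GetLengthf() > 1e-6f)
        return up;                       // normal case: keep the caller's up
    GRPVECTOR fallback(0.0, 1.0, 0.0);   // try world Y instead
    t.CrossProduct(&fallback, &viewDir);
    if (t.GetLengthf() > 1e-6f)
        return fallback;
    return GRPVECTOR(1.0, 0.0, 0.0);     // viewDir was along Y: use world X
}

You would compute GRPVECTOR safeUp = SafeUp(z, up); once per update and pass &safeUp to the CrossProduct call in place of &up, so the camera keeps rendering instead of returning early.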
I'm using the Visual Studio profiler for the first time and I'm trying to interpret the results. Looking at the percentages on the left, I found this subtraction's time cost a bit strange:
Other parts of the code contain more complex expressions, like:
Even a simple multiplication seems way faster than the subtraction:
Other multiplications take way longer, and I really don't get why, like this:
So, I guess my question is whether there is anything weird going on here.
Complex expressions take longer than that subtraction, and some expressions take way longer than similar ones. I ran the profiler several times and the distribution of the percentages is always like this. Am I just interpreting this wrong?
Update:
I was asked to give the profile for the whole function, so here it is, even though it's a bit big. I ran the function inside a for loop for 1 minute and got 50k samples. The function contains a double loop. I include the text first for ease, followed by pictures of the profiling. Note that the code in text form is slightly updated.
for (int i = 0; i < NUMBER_OF_CONTOUR_POINTS; i++) {
vec4 contourPointV(contour3DPoints[i], 1);
float phi = angles[i];
float xW = pose[0][0] * contourPointV.x + pose[1][0] * contourPointV.y + contourPointV.z * pose[2][0] + pose[3][0];
float yW = pose[0][1] * contourPointV.x + pose[1][1] * contourPointV.y + contourPointV.z * pose[2][1] + pose[3][1];
float zW = pose[0][2] * contourPointV.x + pose[1][2] * contourPointV.y + contourPointV.z * pose[2][2] + pose[3][2];
float x = -G_FU_STRICT * xW / zW;
float y = -G_FV_STRICT * yW / zW;
x = (x + 1) * G_WIDTHo2;
y = (y + 1) * G_HEIGHTo2;
y = G_HEIGHT - y;
phi -= extraTheta;
if (phi < 0) phi += CV_PI2;
int indexForTable = phi * oneKoverPI;
//vec2 ray(cos(phi), sin(phi));
vec2 ray(cos_pre[indexForTable], sin_pre[indexForTable]);
vec2 ray2(-ray.x, -ray.y);
float outerStepX = ray.x * step;
float outerStepY = ray.y * step;
cv::Point2f outerPoint(x + outerStepX, y + outerStepY);
cv::Point2f innerPoint(x - outerStepX, y - outerStepY);
cv::Point2f contourPointCV(x, y);
cv::Point2f contourPointCVcopy(x, y);
bool cut = false;
if (!isInView(outerPoint.x, outerPoint.y) || !isInView(innerPoint.x, innerPoint.y)) {
cut = true;
}
bool outside2 = true; bool outside1 = true;
if (cut) {
outside2 = myClipLine(contourPointCV.x, contourPointCV.y, outerPoint.x, outerPoint.y, G_WIDTH - 1, G_HEIGHT - 1);
outside1 = myClipLine(contourPointCVcopy.x, contourPointCVcopy.y, innerPoint.x, innerPoint.y, G_WIDTH - 1, G_HEIGHT - 1);
}
myIterator innerRayMine(contourPointCVcopy, innerPoint);
myIterator outerRayMine(contourPointCV, outerPoint);
if (!outside1) {
innerRayMine.end = true;
innerRayMine.prob = true;
}
if (!outside2) {
outerRayMine.end = true;
innerRayMine.prob = true;
}
vec2 normal = -ray;
float dfdxTerm = -normal.x;
float dfdyTerm = normal.y;
vec3 point3D = vec3(xW, yW, zW);
cv::Point contourPoint((int)x, (int)y);
float Xc = point3D.x; float Xc2 = Xc * Xc; float Yc = point3D.y; float Yc2 = Yc * Yc; float Zc = point3D.z; float Zc2 = Zc * Zc;
float XcYc = Xc * Yc; float dfdxFu = dfdxTerm * G_FU; float dfdyFv = dfdyTerm * G_FU; float overZc2 = 1 / Zc2; float overZc = 1 / Zc;
pixelJacobi[0] = (dfdyFv * (Yc2 + Zc2) + dfdxFu * XcYc) * overZc2;
pixelJacobi[1] = (-dfdxFu * (Xc2 + Zc2) - dfdyFv * XcYc) * overZc2;
pixelJacobi[2] = (-dfdyFv * Xc + dfdxFu * Yc) * overZc;
pixelJacobi[3] = -dfdxFu * overZc;
pixelJacobi[4] = -dfdyFv * overZc;
pixelJacobi[5] = (dfdyFv * Yc + dfdxFu * Xc) * overZc2;
float commonFirstTermsSum = 0;
float commonFirstTermsSquaredSum = 0;
int test = 0;
while (!innerRayMine.end) {
test++;
cv::Point xy = innerRayMine.pos(); innerRayMine++;
int x = xy.x;
int y = xy.y;
float dx = x - contourPoint.x;
float dy = y - contourPoint.y;
vec2 dxdy(dx, dy);
float raw = -glm::dot(dxdy, normal);
float heavisideTerm = heaviside_pre[(int)raw * 100 + 1000];
float deltaTerm = delta_pre[(int)raw * 100 + 1000];
const Vec3b rgb = ante[y * 640 + x];
int red = rgb[0]; int green = rgb[1]; int blue = rgb[2];
red = red >> 3; red = red << 10; green = green >> 3; green = green << 5; blue = blue >> 3;
int colorIndex = red + green + blue;
pF = pFPointer[colorIndex];
pB = pBPointer[colorIndex];
float denAsMul = 1 / (pF + pB + 0.000001);
pF = pF * denAsMul;
float pfMinusPb = 2 * pF - 1;
float denominator = heavisideTerm * pfMinusPb + pB + 0.000001;
float commonFirstTerm = -pfMinusPb / denominator * deltaTerm;
commonFirstTermsSum += commonFirstTerm;
commonFirstTermsSquaredSum += commonFirstTerm * commonFirstTerm;
}
}
Visual Studio profiles by sampling: it interrupts execution often and records the value of the instruction pointer; it then maps that address back to the source and calculates the frequency of hitting each line.
There are a few issues with that: in optimized code it's not always possible to figure out which source line produced a specific assembly instruction.
One trick I use is to move the code of interest into a separate function and declare it with __declspec(noinline).
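For example, something like this (a minimal sketch; the function name and the choice of isolating that one expression are illustrative, not from the original code):

// Hypothetical example of the noinline trick: every sample taken inside
// this function is attributed to one unambiguous symbol, so its cost
// can be read directly from the profiler's function view.
__declspec(noinline) float subtractExtraTheta(float phi, float extraTheta)
{
    return phi - extraTheta;
}

The call overhead adds a little noise, but the attribution becomes unambiguous.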
In your example, are you sure the subtraction was performed as many times as the multiplication? I would be more puzzled by the difference between the subsequent multiplications (0.39% and 0.53%).
Update:
I believe that the following lines:
float phi = angles[i];
and
phi -= extraTheta;
got moved together in the generated assembly, and the time spent loading angles[i] was attributed to that subtraction line.
What I'm trying to achieve is a sprite moving toward another sprite in a 2D environment. I started with the basic Mx = Ax - Bx approach, but I noticed that the closer the sprite gets to the target, the more it slows down. So I tried to create a percentage/ratio based on the velocity, where each of x and y gets its percentage of a speed allowance. However, it's acting very strangely and only seems to work when Mx and My are both positive.
Here's the code extract:
ballX = ball->GetX();
ballY = ball->GetY();
targX = target->GetX();
targY = target->GetY();
ballVx = (targX - ballX);
ballVy = (targY - ballY);
percentComp = (100 / (ballVx + ballVy));
ballVx = (ballVx * percentComp)/10000;
ballVy = (ballVy * percentComp)/10000;
The /10000 is to slow the sprite's movement.
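Note that the percentage approach breaks down as soon as one component is negative, because the components can cancel or flip the sign of the scale factor. A quick illustration using the variables above (the concrete numbers are my own example):

// If the components sum to a negative number, the scale factor flips
// sign and the ball moves away from the target:
float ballVx = 10.0f, ballVy = -30.0f;
float percentComp = 100.0f / (ballVx + ballVy);  // 100 / -20 = -5
ballVx = (ballVx * percentComp) / 10000.0f;      // -0.005: wrong sign!
ballVy = (ballVy * percentComp) / 10000.0f;      //  0.015: wrong sign!
// And when ballVx == -ballVy, the divisor is exactly zero.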
Assuming you want the sprite to move at a constant speed, you can do a linear fade on both the X and Y position, like this:
#include <stdio.h>

int main(int, char **)
{
    float startX = 10.0f, startY = 20.0f;
    float endX = 35.0f, endY = -2.5f;
    int numSteps = 20;
    for (int i = 0; i < numSteps; i++)
    {
        float percentDone = ((float)i) / (numSteps - 1);
        float curX = (startX * (1.0f - percentDone)) + (endX * percentDone);
        float curY = (startY * (1.0f - percentDone)) + (endY * percentDone);
        printf("Step %i: percentDone=%f curX=%f curY=%f\n", i, percentDone, curX, curY);
    }
    return 0;
}
Thanks for the responses. I got it working now by normalising the vector instead of the whole percentage thing; here's what I have now:
ballX = ball->GetX();
ballY = ball->GetY();
targX = target->GetX();
targY = target->GetY();
ballVx = (targX - ballX);
ballVy = (targY - ballY);
vectLength = sqrt((ballVx*ballVx) + (ballVy*ballVy));
ballVx = (ballVx / vectLength)/10;
ballVy = (ballVy / vectLength)/10;
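For reference, here is that normalise-and-scale step as a self-contained function, with one extra guard that is my addition (not in the code above): if the remaining distance is smaller than one step, snap to the target instead of overshooting and oscillating around it.

#include <cmath>

// Sketch: move (ballX, ballY) one step of length 'speed' toward
// (targX, targY); the snap-to-target guard is an added assumption.
void stepTowards(float& ballX, float& ballY,
                 float targX, float targY, float speed)
{
    float vx = targX - ballX;
    float vy = targY - ballY;
    float len = std::sqrt(vx * vx + vy * vy);
    if (len <= speed) {           // within one step: snap to the target
        ballX = targX;
        ballY = targY;
        return;
    }
    ballX += (vx / len) * speed;  // unit direction scaled to constant speed
    ballY += (vy / len) * speed;
}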
I currently have an OpenGL sprite drawing class that buffers up a bunch of sprite data and then dumps it with glDrawElements. The problem is that creating the sprites that go into the buffer is cumbersome, as I have loads of parameters to pass into the buffer, with even more redundancy for the shaders. I was wondering if I could reduce CPU load by loading the buffer with only the essentials (location, orientation, texture coordinates, etc.) and then letting a geometry shader turn that into quads for the fragment shader (see the sketch after the method below).
If there's a different answer, I'm open to it. I've added the offending method so you can see what I mean:
void Machine::draw(key num, BoundingBox loc, float angle){
SpriteSyncData* props;
VertexAttribArray* vdata;
GLushort* idata;
SpriteProperties* sprite_props;
int sliceW;
int sliceH;
sprite_props = &spriteList[num];
props = &spriteToSync[sprite_props->atlas];
props->size++;
// Grow the CPU-side vertex and index buffers when they run out of room.
if(props->size > props->capacity){
props->capacity += COARSE_MEM_SCALE;
props->data = (VertexAttribArray*) realloc((void*) props->data, (sizeof(VertexAttribArray)*4) * props->capacity);
props->i_data = (GLushort*) realloc((void*) props->i_data, (sizeof(GLushort)*4) * props->capacity);
}
vdata = props->data + (props->size - 1) * 4;
idata = props->i_data + (props->size - 1) * 4;
sliceW = sprite_props->location.x1 - sprite_props->location.x0;
sliceH = sprite_props->location.y1 - sprite_props->location.y0;
// Tiled sprite: the quad spans the bounding box and the texture repeats across it.
if(sprite_props->flags & DRAW_TILED){
vdata[0].p = QVector3D(loc.x1, loc.y0, UNIFORM_DEPTH);
vdata[1].p = QVector3D(loc.x0, loc.y0, UNIFORM_DEPTH);
vdata[2].p = QVector3D(loc.x0, loc.y1, UNIFORM_DEPTH);
vdata[3].p = QVector3D(loc.x1, loc.y1, UNIFORM_DEPTH);
vdata[0].s = QVector2D(((float) (loc.x1 - loc.x0)) / sliceW,
((float) (loc.y1 - loc.y0)) / sliceH);
vdata[0].r = QVector2D(0, 0);
vdata[1].r = vdata[0].r;
vdata[2].r = vdata[0].r;
vdata[3].r = vdata[0].r;
}
// Untiled sprite: the quad takes the slice's natural size.
else{
vdata[0].p = QVector3D(loc.x0 + sliceW, loc.y0, UNIFORM_DEPTH);
vdata[1].p = QVector3D(loc.x0, loc.y0, UNIFORM_DEPTH);
vdata[2].p = QVector3D(loc.x0, loc.y0 + sliceH, UNIFORM_DEPTH);
vdata[3].p = QVector3D(loc.x0 + sliceW, loc.y0 + sliceH, UNIFORM_DEPTH);
vdata[0].s = QVector2D(1, 1);
vdata[0].r = QVector2D(sliceW, sliceH);
vdata[1].r = vdata[0].r;
vdata[2].r = vdata[0].r;
vdata[3].r = vdata[0].r;
}
// Texture-atlas coordinates for the four corners.
vdata[0].t = QVector2D(sprite_props->texCoords[2], sprite_props->texCoords[1]);
vdata[1].t = QVector2D(sprite_props->texCoords[0], sprite_props->texCoords[1]);
vdata[2].t = QVector2D(sprite_props->texCoords[0], sprite_props->texCoords[3]);
vdata[3].t = QVector2D(sprite_props->texCoords[2], sprite_props->texCoords[3]);
vdata[1].s = vdata[0].s;
vdata[2].s = vdata[0].s;
vdata[3].s = vdata[0].s;
vdata[0].s_lo = QVector2D(sprite_props->texCoords[0], sprite_props->texCoords[1]);
vdata[0].s_hi = QVector2D(sprite_props->texCoords[2] - sprite_props->texCoords[0],
sprite_props->texCoords[3] - sprite_props->texCoords[1]);
vdata[1].s_lo = vdata[0].s_lo;
vdata[1].s_hi = vdata[0].s_hi;
vdata[2].s_lo = vdata[0].s_lo;
vdata[2].s_hi = vdata[0].s_hi;
vdata[3].s_lo = vdata[0].s_lo;
vdata[3].s_hi = vdata[0].s_hi;
// Rotation origin: the centre of the quad (midpoint of its diagonal).
vdata[0].o = (vdata[1].p + vdata[3].p) * 0.5;
vdata[1].o = vdata[0].o;
vdata[2].o = vdata[0].o;
vdata[3].o = vdata[0].o;
vdata[0].a = angle;
vdata[1].a = angle;
vdata[2].a = angle;
vdata[3].a = angle;
// Four indices per quad, offset by this sprite's slot in the buffer.
idata[0] = (props->size - 1) * 4;
idata[1] = idata[0] + 1;
idata[2] = idata[0] + 2;
idata[3] = idata[0] + 3;
}
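Here is the idea from the question sketched as a reduced per-sprite vertex layout (a hypothetical struct, not the existing VertexAttribArray): submit one point per sprite and let a geometry shader expand it into the four corners, so the CPU writes one compact vertex instead of four nearly identical ones.

// Hypothetical compact layout: one of these per sprite is buffered
// instead of four fully expanded corner vertices. A geometry shader
// declared as (points) in / (triangle_strip, max_vertices = 4) out
// would reconstruct the corners, rotation, and texture rectangle.
struct SpritePointVertex {
    float cx, cy, depth;   // sprite centre and uniform depth
    float halfW, halfH;    // half extents of the quad
    float angle;           // rotation about the centre
    float u0, v0, u1, v1;  // texture-atlas rectangle
};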
I have written some code to perform 3D picking that for some reason doesn't work entirely correctly. (I'm using LWJGL, just so you know.)
This is what the code looks like:
if(Mouse.getEventButton() == 1) {
if (!Mouse.getEventButtonState()) {
Camera.get().generateViewMatrix();
float screenSpaceX = ((Mouse.getX()/800f/2f)-1.0f)*Camera.get().getAspectRatio();
float screenSpaceY = 1.0f-(2*((600-Mouse.getY())/600f));
float displacementRate = (float)Math.tan(Camera.get().getFovy()/2);
screenSpaceX *= displacementRate;
screenSpaceY *= displacementRate;
Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * Camera.get().getNear()), (float) (screenSpaceY * Camera.get().getNear()), (float) (-Camera.get().getNear()), 1);
Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * Camera.get().getFar()), (float) (screenSpaceY * Camera.get().getFar()), (float) (-Camera.get().getFar()), 1);
Matrix4f tmpView = new Matrix4f();
Camera.get().getViewMatrix().transpose(tmpView);
Matrix4f invertedViewMatrix = (Matrix4f)tmpView.invert();
Vector4f worldSpaceNear = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceNear, worldSpaceNear);
Vector4f worldSpaceFar = new Vector4f();
Matrix4f.transform(invertedViewMatrix, cameraSpaceFar, worldSpaceFar);
Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
rayDirection.normalise();
Ray clickRay = new Ray(rayPosition, rayDirection);
Vector tMin = new Vector(), tMax = new Vector(), tempPoint;
float largestEnteringValue, smallestExitingValue, temp, closestEnteringValue = Camera.get().getFar()+0.1f;
Drawable closestDrawableHit = null;
for(Drawable d : this.worldModel.getDrawableThings()) {
// Calculate AABB for each object... needs to be moved later...
firstVertex = true;
for(Surface surface : d.getSurfaces()) {
for(Vertex v : surface.getVertices()) {
worldPosition.x = (v.x+d.getPosition().x)*d.getScale().x;
worldPosition.y = (v.y+d.getPosition().y)*d.getScale().y;
worldPosition.z = (v.z+d.getPosition().z)*d.getScale().z;
worldPosition = worldPosition.rotate(d.getRotation());
if (firstVertex) {
maxX = worldPosition.x; maxY = worldPosition.y; maxZ = worldPosition.z;
minX = worldPosition.x; minY = worldPosition.y; minZ = worldPosition.z;
firstVertex = false;
} else {
if (worldPosition.x > maxX) {
maxX = worldPosition.x;
}
if (worldPosition.x < minX) {
minX = worldPosition.x;
}
if (worldPosition.y > maxY) {
maxY = worldPosition.y;
}
if (worldPosition.y < minY) {
minY = worldPosition.y;
}
if (worldPosition.z > maxZ) {
maxZ = worldPosition.z;
}
if (worldPosition.z < minZ) {
minZ = worldPosition.z;
}
}
}
}
// Ray/slabs intersection test. Solving
//   clickRay.getOrigin().x + clickRay.getDirection().x * f = minX
// for f gives
//   f = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x
largestEnteringValue = -clickRay.getOrigin().x/clickRay.getDirection().x + minX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + minY/clickRay.getDirection().y;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + minZ/clickRay.getDirection().z;
if(largestEnteringValue < temp) {
largestEnteringValue = temp;
}
smallestExitingValue = -clickRay.getOrigin().x/clickRay.getDirection().x + maxX/clickRay.getDirection().x;
temp = -clickRay.getOrigin().y/clickRay.getDirection().y + maxY/clickRay.getDirection().y;
if(smallestExitingValue > temp) {
smallestExitingValue = temp;
}
temp = -clickRay.getOrigin().z/clickRay.getDirection().z + maxZ/clickRay.getDirection().z;
if(smallestExitingValue < temp) {
smallestExitingValue = temp;
}
if(largestEnteringValue > smallestExitingValue) {
//System.out.println("Miss!");
} else {
if (largestEnteringValue < closestEnteringValue) {
closestEnteringValue = largestEnteringValue;
closestDrawableHit = d;
}
}
}
if(closestDrawableHit != null) {
System.out.println("Hit at: (" + clickRay.setDistance(closestEnteringValue).x + ", " + clickRay.getCurrentPosition().y + ", " + clickRay.getCurrentPosition().z);
this.worldModel.removeDrawableThing(closestDrawableHit);
}
}
}
I just don't understand what's wrong. The rays are being cast and I do hit things that get removed, but the results are very strange: sometimes it removes the thing I'm clicking on, sometimes it removes things that aren't even close to what I'm clicking on, and sometimes it removes nothing at all.
Edit:
Okay, so I have continued searching for errors, and by debugging the ray (by painting small dots where it travels) I can now see that there is something obviously wrong with the ray I'm sending out: it has its origin near the world center and always shoots to the same position no matter where I point my camera.
My initial thought is that there might be some error in the way I calculate my view matrix (since it's not possible to get the view matrix from the gluLookAt method in LWJGL, I have to build it myself, and I guess that's where the problem is)...
Edit2:
This is how I calculate it currently:
private double[][] viewMatrixDouble = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,1}};
public Vector getCameraDirectionVector() {
Vector actualEye = this.getActualEyePosition();
return new Vector(lookAt.x-actualEye.x, lookAt.y-actualEye.y, lookAt.z-actualEye.z);
}
public Vector getActualEyePosition() {
return eye.rotate(this.getRotation());
}
public void generateViewMatrix() {
Vector cameraDirectionVector = getCameraDirectionVector().normalize();
Vector side = Vector.cross(cameraDirectionVector, this.upVector).normalize();
Vector up = Vector.cross(side, cameraDirectionVector);
viewMatrixDouble[0][0] = side.x; viewMatrixDouble[0][1] = up.x; viewMatrixDouble[0][2] = -cameraDirectionVector.x;
viewMatrixDouble[1][0] = side.y; viewMatrixDouble[1][1] = up.y; viewMatrixDouble[1][2] = -cameraDirectionVector.y;
viewMatrixDouble[2][0] = side.z; viewMatrixDouble[2][1] = up.z; viewMatrixDouble[2][2] = -cameraDirectionVector.z;
/*
Vector actualEyePosition = this.getActualEyePosition();
Vector zaxis = new Vector(this.lookAt.x - actualEyePosition.x, this.lookAt.y - actualEyePosition.y, this.lookAt.z - actualEyePosition.z).normalize();
Vector xaxis = Vector.cross(upVector, zaxis).normalize();
Vector yaxis = Vector.cross(zaxis, xaxis);
viewMatrixDouble[0][0] = xaxis.x; viewMatrixDouble[0][1] = yaxis.x; viewMatrixDouble[0][2] = zaxis.x;
viewMatrixDouble[1][0] = xaxis.y; viewMatrixDouble[1][1] = yaxis.y; viewMatrixDouble[1][2] = zaxis.y;
viewMatrixDouble[2][0] = xaxis.z; viewMatrixDouble[2][1] = yaxis.z; viewMatrixDouble[2][2] = zaxis.z;
viewMatrixDouble[3][0] = -Vector.dot(xaxis, actualEyePosition); viewMatrixDouble[3][1] =-Vector.dot(yaxis, actualEyePosition); viewMatrixDouble[3][2] = -Vector.dot(zaxis, actualEyePosition);
*/
viewMatrix = new Matrix4f();
viewMatrix.load(getViewMatrixAsFloatBuffer());
}
I would be very grateful if anyone could verify whether this is wrong or right, and if it's wrong, supply me with the right way of doing it.
I have read a lot of threads and documentation about this, but I can't seem to wrap my head around it.
I just don't understand what's wrong: the rays are being cast and I do hit things that get removed, but things are not disappearing where I click on the screen.
OpenGL is not a scene graph; it's a drawing library. So after removing something from your internal representation, you must redraw the scene, and your code is missing a call to a function that triggers a redraw.
Okay, so I finally solved it with help from the guys at gamedev and a friend; here is a link to the answer where I have posted the code!
I'm trying to create a bone and IK system. Below is the recursive method that calculates the absolute positions and absolute angles of each bone. I call it with the root bone and zeroed parameters. It works fine, but when I try to use CCD IK I get discrepancies between the resulting end point and the calculated one, so maybe I'm doing this wrong even though it appears to work.
Thanks
void Skeleton::_updateBones( Bone* root, float realStartX, float realStartY, float realStartAngle )
{
    // Bones flagged as absolute ignore the accumulated parent transform.
    if(!root->isRelative())
    {
        realStartX = 0.0f;
        realStartY = 0.0f;
        realStartAngle = 0.0f;
    }
    // Accumulate this bone's local offset and rotation onto the parent's.
    realStartX += root->getX();
    realStartY += root->getY();
    realStartAngle += root->getAngle();
    // Direction of the bone; its end point is start + direction * length.
    float vecX = sin(realStartAngle);
    float vecY = cos(realStartAngle);
    realStartX += (vecX * root->getLength());
    realStartY += (vecY * root->getLength());
    root->setFrame(realStartX, realStartY, realStartAngle);
    // Wrap the accumulated angle into [-pi, pi] before recursing.
    float angle = fmod(realStartAngle, 2.0f * 3.141592f);
    if( angle < -3.141592f )
        angle += (2.0f * 3.141592f);
    else if( angle > 3.141592f )
        angle -= (2.0f * 3.141592f);
    for(std::list<Bone>::iterator it = root->begin(); it != root->end(); ++it)
    {
        _updateBones(&(*it), realStartX, realStartY, angle);
    }
}
This looks wrong:
float vecX = sin(realStartAngle);
float vecY = cos(realStartAngle);
Assuming angles are measured counterclockwise from the positive x-axis (the usual convention), the direction vector should be (cos θ, sin θ), so swap sin() and cos():
float vecX = cos(realStartAngle);
float vecY = sin(realStartAngle);
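A quick sanity check of that convention (my own example, not from the original post):

#include <cstdio>
#include <cmath>

int main()
{
    // At angle 0 a bone should extend along +x: (cos 0, sin 0) = (1, 0).
    // With the original (sin, cos) order it extends along +y instead.
    float angle = 0.0f;
    printf("swapped:  (%.1f, %.1f)\n", cosf(angle), sinf(angle)); // (1.0, 0.0)
    printf("original: (%.1f, %.1f)\n", sinf(angle), cosf(angle)); // (0.0, 1.0)
    return 0;
}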