I'm trying to implement CCD inverse kinematics in 2D.
This function is supposed to do one iteration of CCD.
Right now, as a test case, I start it at a left foot and have it stop at the pelvis.
Every time this function is called, the skeleton's bones are updated.
The way my bones work is:
getFrameX, getFrameY, and getFrameAngle return the absolute position/orientation of the end of the bone (the effector). These are updated every iteration of CCD.
getAngle,X, Y returns the relative values.
Same for setters.
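For context, here is a minimal sketch of the Bone interface those semantics imply. Only the members my code uses appear below; the exact signatures are assumptions:

// Hypothetical sketch of the skl::Bone interface described above --
// only the members used in the snippet below; signatures are assumptions.
#include <string>

namespace skl {
    class Bone {
    public:
        // absolute (world-space) values for the end of the bone / effector
        double getFrameX() const;
        double getFrameY() const;
        double getFrameAngle() const;
        // values relative to the parent bone
        double getX() const;
        double getY() const;
        double getAngle() const;
        void   setAngle(double radians);   // relative, as used by the CCD pass
        Bone*  getParent() const;
        std::string getName() const;
    };
}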
Right now it never settles in one spot; every time I jiggle the mouse a bit, the bones drift counterclockwise seemingly at random.
I was wondering if there is something blatantly wrong that could point me in the right direction for debugging.
void inverseKinematics(float targetX, float targetY, skl::Bone* targetBone)
{
    std::string stopBone = "Pelvis";
    //===
    // Track the end effector position (the final bone)
    double endX = targetBone->getFrameX();
    double endY = targetBone->getFrameY();
    //===
    // Perform CCD on the bones by optimizing each bone in a loop
    // from the final bone to the root bone
    bool modifiedBones = false;
    targetBone = targetBone->getParent();
    while(targetBone->getName() != stopBone)
    {
        // Get the vector from the current bone to the end effector position.
        double curToEndX = endX - targetBone->getFrameX();
        double curToEndY = endY - targetBone->getFrameY();
        double curToEndMag = sqrt( curToEndX*curToEndX + curToEndY*curToEndY );

        // Get the vector from the current bone to the target position.
        double curToTargetX = targetX - targetBone->getFrameX();
        double curToTargetY = targetY - targetBone->getFrameY();
        double curToTargetMag = sqrt( curToTargetX*curToTargetX
                                    + curToTargetY*curToTargetY );

        // Get rotation to place the end effector on the line from the current
        // joint position to the target position.
        double cosRotAng;
        double sinRotAng;
        double endTargetMag = (curToEndMag*curToTargetMag);
        if( endTargetMag <= 0.1f )
        {
            cosRotAng = 1.0f;
            sinRotAng = 0.0f;
        }
        else
        {
            cosRotAng = (curToEndX*curToTargetX + curToEndY*curToTargetY) / endTargetMag;
            sinRotAng = (curToEndX*curToTargetY - curToEndY*curToTargetX) / endTargetMag;
        }

        // Clamp the cosine into range when computing the angle (might be out of range
        // due to floating point error).
        double rotAng = acosf( max(-1.0f, min(1.0f, cosRotAng)) );
        if( sinRotAng < 0.0f )
            rotAng = -rotAng;

        // Rotate the end effector position.
        endX = targetBone->getFrameX() + cosRotAng*curToEndX - sinRotAng*curToEndY;
        endY = targetBone->getFrameY() + sinRotAng*curToEndX + cosRotAng*curToEndY;

        // Rotate the current bone in local space (this value is output to the user)
        targetBone->setAngle(SimplifyAngle(targetBone->getAngle() + rotAng));

        // Check for termination
        double endToTargetX = (targetX-endX);
        double endToTargetY = (targetY-endY);
        if( endToTargetX*endToTargetX + endToTargetY*endToTargetY <= 1.0f )
        {
            // We found a valid solution.
            return;
        }

        // Track if the arc length that we moved the end effector was
        // a nontrivial distance.
        if( !modifiedBones && fabs(rotAng)*curToEndMag > 0.0001f )
        {
            modifiedBones = true;
        }
        targetBone = targetBone->getParent();
    }
}
Thanks
No, there is nothing obviously wrong in the program listing you have given. You are correctly computing the change of angle rotAng and the new position (endX, endY) of the end-effector.
You can compute rotAng more simply as
double rotAng =
atan2(curToTargetY, curToTargetX) - atan2(curToEndY, curToEndX);
which gives the same rotation up to a multiple of 2π (your SimplifyAngle call already normalizes that away), assuming both vectors are non-zero.
I suspect the error is somewhere outside of the program listing you have given. Maybe there is a discrepancy between the forward kinematics assumed in inverseKinematics() and the actual forward kinematics used in the display routines and elsewhere. Try recomputing the forward kinematics at the end of the procedure to see if the rest of the system agrees that the end-effector is at (endX, endY).
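In code, that check at the end of inverseKinematics() might look something like this. It is only a sketch: skeleton, endEffector, and recomputeFrames() are placeholder names for whatever your system actually provides to refresh and read the absolute frames:

// Hypothetical sketch -- placed at the end of inverseKinematics(), while the
// locally tracked endX/endY are still in scope. recomputeFrames(), skeleton,
// and endEffector are placeholder names.
skeleton->recomputeFrames();              // refresh every bone's getFrameX/getFrameY
double fkX = endEffector->getFrameX();
double fkY = endEffector->getFrameY();
double dx = fkX - endX;
double dy = fkY - endY;
if (dx*dx + dy*dy > 1e-6)
    std::cerr << "FK disagrees with the IK loop's internal end-effector estimate\n";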
I am currently trying to create a method which, given a simulated vehicle's position and direction and an object's position, will determine whether the object lies on the right-hand side or the left-hand side of the vehicle's direction of travel. An image is shown here: Simple Diagram of Problem Situation
So far I have tried to use the cross product and some other methods to solve the problem. I will include the relevant code blocks here:
void Class::sortCones()
{
    // Clearing both _lhsCones and _rhsCones vectors
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) > 0)
        {
            _lhsCones.push_back(_cones[i]);
        }
        if (indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw) == 0)
        {
            return;
        }
        else
        {
            _rhsCones.push_back(_cones[i]);
        }
    }
    return;
}
double Class::indicateSide(double xCar, double yCar, double xCone, double yCone, double yawCar)
{
    // Compute the i and j components of the yaw measurement as a unit vector, i.e. vector mag = 1
    double iOne = cos(yawCar);
    double jOne = sin(yawCar);
    // Create the car-to-cone vector
    double iTwo = xCone - xCar;
    double jTwo = yCone - yCar;
    // Ensure the car-to-cone vector is normalised
    double magTwo = std::sqrt(std::pow(iTwo, 2) + std::pow(jTwo, 2));
    iTwo = iTwo / magTwo;
    jTwo = jTwo / magTwo;
    // - old method
    // Using the transformation matrix with theta = yaw (angle in radians), transform the axes to the augmented 2D space
    // Take the cross product of < Ex, 0 > x < x', y' > where x', y' have the same location in the simulation space.
    // double Ex = cos(yawCar)*iOne - sin(yawCar)*jOne;
    // double Ey = sin(yawCar)*iOne + cos(yawCar)*jOne;
    double result = iOne*jTwo - jOne*iTwo;
    return result;
}
The car currently just seems to run off in a straight line, and one of the suspect elements is the left/right sorting method. Any direction is GREATLY appreciated.
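One thing worth checking in sortCones as posted: the else pairs with the == 0 test, not with the > 0 test. So a cone with a positive cross product is pushed into _lhsCones and then, because its result is non-zero, into _rhsCones as well; and a cone exactly in line with the heading (result == 0) aborts the whole sort with return. A sketch of the corrected control flow, calling indicateSide once per cone:

void Class::sortCones()
{
    _rhsCones.clear();
    _lhsCones.clear();
    for (int i = 0; i < _cones.size(); i++)
    {
        // evaluate the side once per cone and branch on the sign
        double side = indicateSide(_x, _y, _cones[i].x(), _cones[i].y(), _yaw);
        if (side > 0)
            _lhsCones.push_back(_cones[i]);
        else if (side < 0)
            _rhsCones.push_back(_cones[i]);
        // side == 0: cone is exactly in line with the heading;
        // skip it rather than abandoning the whole sort
    }
}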
I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection but after moving onto Lambertian and Blinn-Phong Shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simply perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so light's coords minus hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector whose components all lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector where the z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units apart along the z axis? Obviously the two z values are within 1 of each other in absolute-value terms, which is what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
    for (int j = 0; j < numPixY; j++)
    {
        for (SceneSurface* object : objects)
        {
            float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
            float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
            Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
            Ray viewingRay(origin, eye, direction);
            RayTestResult testResult = object->TestViewRay(viewingRay);
            if (testResult.m_bRayHit)
            {
                Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
                Vector3f light_direction = (light - hitPoint).toVector().normalized();
                Vector3f view_direction = direction.negated().normalized();
                Vector3f surface_normal = object->GetNormalAt(hitPoint);
                image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
            }
        }
    }
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
    return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), each with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
    RayTestResult result;
    result.m_bRayHit = false;
    Position3f &c = position;
    float r = radius;
    Vector3f &d = viewRay.getDirection();
    Position3f &e = viewRay.getPosition();
    float part = d*(e - c);
    Position3f part2 = (e - c);
    float part3 = d * d;
    float discriminant = ((part*part) - (part3)*((part2*part2) - (r * r)));
    if (discriminant > 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 2;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    else if (discriminant == 0)
    {
        float t_add = ((d) * (part2) + sqrt(discriminant)) / (part3);
        float t_sub = ((d) * (part2) - sqrt(discriminant)) / (part3);
        float t = fmin(t_add, t_sub);
        if (t > 0)
        {
            result.m_iNumberOfSolutions = 1;
            result.m_bRayHit = true;
            result.m_fDist = t;
        }
    }
    return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d); I had removed that when I implemented a member function to negate any given vector, but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
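Spelled out in full, the corrected solve is t = (-d.(e - c) ± sqrt(disc)) / (d.d). A sketch reusing the meanings above (ray origin e, direction d, sphere center c, radius r; the dot-product operators are assumed to work as in the code already posted):

// Sketch of the corrected ray-sphere solve in the question's own types.
Position3f e_min_c = e - c;
float dd   = d * d;                      // d . d
float de   = d * e_min_c;                // d . (e - c)
float disc = de*de - dd * ((e_min_c * e_min_c) - (r * r));
if (disc >= 0)
{
    float t_sub = (-de - sqrt(disc)) / dd;   // nearer intersection
    float t_add = (-de + sqrt(disc)) / dd;   // farther intersection
    float t = (t_sub > 0) ? t_sub : t_add;   // first hit in front of the origin
    // t > 0 : hit at e + t*d; otherwise the sphere is behind the ray
}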
Lambertian + Blinn-Phong
I've been working on detecting collisions between two objects in my game. Right now everything travels vertically, but I would like to keep the option for other movement open. It's a classic 2D vertical space shooter.
Right now I loop through every object, checking for collisions:
for(std::list<Object*>::iterator iter = mObjectList.begin(); iter != mObjectList.end();) {
    Object *m = (*iter);
    for(std::list<Object*>::iterator innerIter = ++iter; innerIter != mObjectList.end(); innerIter++) {
        Object *s = (*innerIter);
        if(m->getType() == s->getType()) {
            break;
        }
        if(m->checkCollision(s)) {
            m->onCollision(s);
            s->onCollision(m);
        }
    }
}
Here is how I check for a collision:
bool checkCollision(Object *other) {
    float radius = mDiameter / 2.f;
    float theirRadius = other->getDiameter() / 2.f;
    Vector<float> ourMidPoint = getAbsoluteMidPoint();
    Vector<float> theirMidPoint = other->getAbsoluteMidPoint();
    // If the other object is in between our path on the y axis
    if(std::min(getAbsoluteMidPoint().y - radius, getPreviousAbsoluteMidPoint().y - radius) <= theirMidPoint.y &&
       theirMidPoint.y <= std::max(getAbsoluteMidPoint().y + radius, getPreviousAbsoluteMidPoint().y + radius)) {
        // Get the distance between the midpoints on the x axis
        float xd = abs(ourMidPoint.x - theirMidPoint.x);
        // If the distance between the two midpoints
        // is greater than both of their radii together
        // then they are too far away to collide
        if(xd > radius + theirRadius) {
            return false;
        } else {
            return true;
        }
    }
    return false;
}
The problem is that it sometimes detects collisions correctly, but other times does not detect them at all. It's not the type check breaking out of the object loop, because the objects do have different types. The closer an object is to the top of the screen, the better the chance that a collision is detected correctly; the closer to the bottom of the screen, the lower the chance that it is detected correctly, or even at all. However, these situations don't always occur. I made the objects' diameters massive (10 and 20) to see if that was the problem, but it doesn't help much at all.
EDIT - Updated Code
bool checkCollision(Object *other) {
    float radius = mDiameter / 2.f;
    float theirRadius = other->getDiameter() / 2.f;
    Vector<float> ourMidPoint = getAbsoluteMidPoint();
    Vector<float> theirMidPoint = other->getAbsoluteMidPoint();
    // Find the distance between the two points from the center of the object
    float a = theirMidPoint.x - ourMidPoint.x;
    float b = theirMidPoint.y - ourMidPoint.y;
    // Find the hypotenuse
    double c = (a*a)+(b*b);
    double radii = pow(radius+theirRadius, 2.f);
    // If the distance between the points is less than or equal to the radius
    // then the circles intersect
    if(c <= radii*radii) {
        return true;
    } else {
        return false;
    }
}
Two circular objects collide when the distance between their centers is small enough. You can use the following code to check this:
double distanceSquared =
    pow(ourMidPoint.x - theirMidPoint.x, 2.0) +
    pow(ourMidPoint.y - theirMidPoint.y, 2.0);
bool haveCollided = (distanceSquared <= pow(radius + theirRadius, 2.0));
In order to check whether there was a collision between two points in time, you can check for a collision at the start of the time interval and at the end of it; however, if the objects move very fast, the collision detection can fail (I guess you have encountered this problem for falling objects, which have the greatest speed at the bottom of the screen).
The following might make the collision detection more reliable (though still not perfect). Suppose the objects move with constant speed; then, their position is a linear function of time:
our_x(t) = our_x0 + our_vx * t;
our_y(t) = our_y0 + our_vy * t;
their_x(t) = their_x0 + their_vx * t;
their_y(t) = their_y0 + their_vy * t;
Now you can define the (squared) distance between them as a quadratic function of time. Find the time at which it assumes its minimum value (i.e. where its derivative is 0); if that time lies in the current time interval, calculate the minimum value and check it for collision, as in the sketch below.
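A sketch of that closest-approach test (all names are placeholders following the equations above; dt is the length of the current frame):

// Sketch: closest approach of two constant-velocity circles during one frame.
float dx  = their_x0 - our_x0,  dy  = their_y0 - our_y0;   // relative position
float dvx = their_vx - our_vx,  dvy = their_vy - our_vy;   // relative velocity

// Squared distance D(t) = |dp + dv*t|^2 is quadratic in t; its derivative
// is zero at t* = -(dp . dv) / (dv . dv).
float dvv = dvx*dvx + dvy*dvy;
float t = (dvv > 0.0f) ? -(dx*dvx + dy*dvy) / dvv : 0.0f;
if (t < 0.0f) t = 0.0f;          // clamp the minimum into the interval [0, dt]
if (t > dt)   t = dt;

// Evaluate the separation at t* and compare against the combined radii.
float cx = dx + dvx*t, cy = dy + dvy*t;
bool collided = (cx*cx + cy*cy) <= (radius + theirRadius)*(radius + theirRadius);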
This should be enough to detect collisions almost perfectly; if your application works heavily with free-falling objects, you might want to refine the movement functions to be quadratic:
our_x(t) = our_x0 + our_v0x * t;
our_y(t) = our_y0 + our_v0y * t + g/2 * t^2;
This logic is wrong:
if(std::min(getAbsoluteMidPoint().y - radius, getPreviousAbsoluteMidPoint().y - radius) <= theirMidPoint.y &&
   theirMidPoint.y <= std::max(getAbsoluteMidPoint().y + radius, getPreviousAbsoluteMidPoint().y + radius))
{
    // then a collision is possible, check x
}
(The logic inside the braces is wrong too, but that should produce false positives, not false negatives.) Checking whether a collision has occurred during a time interval can be tricky; I'd suggest checking for a collision at the present time, and getting that to work first. When you check for a collision (now) you can't check x and y independently, you must look at the distance between the object centers.
EDIT:
The edited code is still not quite right.
// Find the hypotenuse
double c = (a*a)+(b*b); // actual hypotenuse squared
double radii = pow(radius+theirRadius, 2.f); // critical hypotenuse squared
if(c <= radii*radii) { // now you compare a distance^2 to a distance^4
    return true; // collision
}
It should be either this:
double c2 = (a*a)+(b*b); // actual hypotenuse squared
double r2 = pow(radius+theirRadius, 2.f); // critical hypotenuse squared
if(c2 <= r2) {
    return true; // collision
}
or this:
double c2 = (a*a)+(b*b); // actual hypotenuse squared
double c = pow(c2, 0.5); // actual hypotenuse
double r = radius + theirRadius; // critical hypotenuse
if(c <= r) {
    return true; // collision
}
Your inner loop needs to start at mObjectList.begin() instead of iter.
The inner loop needs to iterate over the entire list otherwise you miss collision candidates the further you progress in the outer loop.
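A sketch of the corrected scan following that suggestion. The important change is the continue: one same-type neighbour must not end the inner loop early, which is what the break was doing. Since each ordered pair is now visited once, each visit fires only its own side's onCollision:

for (std::list<Object*>::iterator iter = mObjectList.begin(); iter != mObjectList.end(); ++iter) {
    Object *m = *iter;
    for (std::list<Object*>::iterator innerIter = mObjectList.begin(); innerIter != mObjectList.end(); ++innerIter) {
        Object *s = *innerIter;
        if (s == m)
            continue;               // never test an object against itself
        if (m->getType() == s->getType())
            continue;               // skip this pair, keep scanning the rest
        if (m->checkCollision(s))
            m->onCollision(s);      // s->onCollision(m) fires when the loops visit (s, m)
    }
}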
I have created an open planar area with thin cylinders on it, like pucks; they bounce around the area and have collision detection for some larger cylinders also placed on the plane. I am now trying to get them to head towards a set point on the plane using a steering method.
The steering works for avoiding the obstacles: I calculate the distance to the obstacle, then the angle between the direction of travel and the direction of the obstacle, and when the puck is too close it steers left or right based on that angle. The same technique reversed fails to work for steering towards a point. I have tried using both acos and atan2 to calculate the angle between the direction of travel and the target direction, and from the outputs I believe that part is right, but when I try to use that calculation to determine when to steer towards the target I get unexpected results, sometimes random turning.
#include "Puck.h"
#include <iostream>
#include <fstream>
using namespace std;
#include <math.h>
ofstream fout("danna.txt");
#ifndef M_PI
#define M_PI 3.1415
#endif
class TranslateCB : public osg::NodeCallback
{
public:
TranslateCB() : _dx( 0. ), _dy( 0. ), _dirx(1), _diry(0), _inc(0.1), _theta(0) {}
TranslateCB(Puck** pp, Obstacle** ob, int count, double r, double x, double y) : _dx( 0. ), _dy( 0. ),
_dirx(2.0*rand()/RAND_MAX-1), _diry(2.0*rand()/RAND_MAX-1), _inc(0.3), _theta(0)
{
obstacles = ob;
ob_count = count;
_radius = r;
_x = x;
_y = y;
puckH = pp;
}
virtual void operator()( osg::Node* node,osg::NodeVisitor* nv )
{
osg::MatrixTransform* mt =
dynamic_cast<osg::MatrixTransform*>( node );
osg::Matrix mR, mT;
mT.makeTranslate( _dx , _dy, 0. );
mt->setMatrix( mT );
double ob_dirx;
double ob_diry;
double ob_dist;
double centerX=0, centerY =0;
_theta = 0;
double min = 4;
// location that I am trying to get the pucks to head towards
centerX = 1;
centerY = 5;
double tDirx = (_x+_dx) - centerX;
double tDiry = (_y+_dy) - centerY;
double tDist = sqrt(tDirx*tDirx+tDiry*tDiry); //distance to target location
// normalizing my target direction
tDirx = tDirx/tDist;
tDiry = tDiry/tDist;
double hDist = sqrt(_dirx*_dirx + _diry*_diry); //distance to next heading
_dirx= _dirx/hDist;
_diry= _diry/hDist;
double cAngle = acos(_dirx*tDirx+_diry*tDiry); //using inverse of cos to calculate angle between directions
double tAngle = atan2(centerY - (_y+_dy),centerX - (_x+_dx)); // using inverse of tan to calculate angle between directions
double tMin = tDist*sin(cAngle);
//if statement used to define when to apply steering direction
if(tMin > 3)
{
if(tDist < 1){ _theta = 0; } //puck is inside target location, so keep travelling straight
if(cAngle > M_PI/2){ _theta = -0.1; } //turn left
else{ _theta = 0.1; } //turn right
}
else{ _theta = 0; }
////// The collision detection for the obstacles that works on the same princables that I am using above
for(int i = 0; i < ob_count; i++)
{
ob_dirx = (_x+_dx) - obstacles[i]->x;
ob_diry = (_y+_dy) - obstacles[i]->y;
ob_dist = sqrt(ob_dirx*ob_dirx+ob_diry*ob_diry);
if (ob_dist < 3) {
//normalise directions
double ob_norm = sqrt(ob_dirx*ob_dirx+ob_diry*ob_diry);
ob_dirx = (ob_dirx)/ob_norm;
ob_diry = (ob_diry)/ob_norm;
double norm = sqrt(_dirx*_dirx+_diry*_diry);
_dirx = (_dirx)/norm;
_diry = (_diry)/norm;
//calculate angle between direction travelling, and direction to obstacle
double angle = acos(_dirx*ob_dirx + _diry*ob_diry);
//calculate closest distance between puck and obstacle if continues on same path
double min_dist = ob_dist*sin(angle);
if(min_dist < _radius + obstacles[i]->radius && ob_dist < min+obstacles[i]->radius)
{
min = ob_dist;
if(ob_dist < _radius + obstacles[i]->radius){ _theta = 0; }
else if(angle <= M_PI/2){ _theta = -0.3; }
else{ _theta = 0.3; }
}
}
}
//change direction accordingly
_dirx = _dirx*cos(_theta) + _diry*sin(_theta);
_diry = _diry*cos(_theta) - _dirx*sin(_theta);
_dx += _inc*_dirx;
if((_x+_dx > 20 && _dirx > 0) || (_x+_dx < -20 && _dirx < 0))
{
_dirx = -_dirx;
_diry += (0.2*rand()/RAND_MAX-0.1); //add randomness to bounce
}
_dy += _inc*_diry;
if((_y+_dy > 20 && _diry > 0) || (_y+_dy < -20 && _diry < 0))
{
_diry = -_diry;
_dirx += (0.2*rand()/RAND_MAX-0.1); //add randomness to bounce
}
traverse( node, nv );
}
private:
double _dx,_dy;
double _dirx,_diry;
double _inc;
double _theta;
double _radius;
double _x,_y;
Obstacle** obstacles;
Puck** puckH;
int ob_count;
};
Puck::Puck()
{
}
void Puck::createBoids (Puck** pucks, Group *root, Obstacle** obstacles, int count, double xx, double yy)
{
// geometry
radius = 0.2;
x = xx;
y = yy;
ob_count = count;
Cylinder *shape=new Cylinder(Vec3(x,y,0),radius,0.1);
ShapeDrawable *draw=new ShapeDrawable(shape);
draw->setColor(Vec4(1,0,0,1));
Geode *geode=new Geode();
geode->addDrawable(draw);
// transformation
MatrixTransform *T=new MatrixTransform();
TranslateCB *tcb = new TranslateCB(pucks, obstacles,ob_count,radius,x,y);
T->setUpdateCallback(tcb);
T->addChild(geode);
root->addChild(T);
}
any help would be amazing!
The problem here is that the technique that "works" for avoiding obstacles is only ever invoked when the puck is heading towards the obstacle. That special condition places the puck's direction and the direction to the obstacle in adjacent quadrants.
When you try to make the pucks steer towards a target, however, the technique breaks down, because the puck may well be heading away from the target, so the condition that the target and direction vectors lie in adjacent quadrants no longer holds.
The correct way to determine the steering direction is to rotate the target vector by the angle that would make the direction vector point straight up, at (0, 1). Once the target vector is expressed relative to that direction, its x component determines the steering direction: if it is negative, the puck must turn left to steer towards the target (increase the angle); if it is positive, the puck must turn right (decrease the angle).
Consider the following snippet, written in Python to verify this; it should still be easy enough to read to grasp the concept:
from math import *

dirX = 0.0
dirY = 0.0
targX = 1.0
targY = 0.0

def dir():
    global dirX, dirY, targX, targY
    # get magnitude of direction
    mag1 = sqrt(dirX*dirX + dirY*dirY)
    if mag1 != 0:
        # normalize direction vector
        normX = dirX / mag1
        normY = dirY / mag1
        # get magnitude of target vector
        mag2 = sqrt(targX*targX + targY*targY)
        if mag2 != 0:
            # normalize target vector
            targX = targX / mag2
            targY = targY / mag2
            # find the angle needed to rotate the dir vector to (0, 1)
            rotateAngle = (pi/2.0) - atan2(normY, normX)
            # rotate the targ vector by that angle (we only care about the x component)
            relTargX = cos(rotateAngle) * targX - sin(rotateAngle) * targY
            # if the target vector's x is negative
            if relTargX < 0:
                # turn left
                print("Left!")
            # otherwise the target vector is 0 or positive
            else:
                # turn right
                print("Right!")

def out():
    global dirX, dirY, targX, targY
    # function just prints values to the screen
    print("dir(%f, %f) targ(%f, %f)" % (dirX, dirY, targX, targY))

# for values 0 to 360
for i in range(360):
    # pretend this is the puck's direction
    dirX = sin(radians(i))
    dirY = cos(radians(i))
    # print dir and target vectors to screen
    out()
    # print the direction to turn
    dir()
I suppose I could've written this in C++, but compared to running a Python prompt it's a royal pain. It is as readable as any pseudocode I could've written, and the concepts will work regardless of language.
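For completeness, here is a direct C++ transliteration of the same test, in case it is easier to drop into the project (the steering logic is unchanged; only <cmath> and <cstdio> are needed):

// A direct C++ transliteration of the Python test above.
#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979323846;
    double targX = 1.0, targY = 0.0;

    for (int i = 0; i < 360; ++i)
    {
        // pretend this is the puck's direction
        double dirX = std::sin(i * pi / 180.0);
        double dirY = std::cos(i * pi / 180.0);
        std::printf("dir(%f, %f) targ(%f, %f)\n", dirX, dirY, targX, targY);

        double mag1 = std::sqrt(dirX*dirX + dirY*dirY);
        double mag2 = std::sqrt(targX*targX + targY*targY);
        if (mag1 == 0.0 || mag2 == 0.0)
            continue;                        // degenerate vectors: nothing to steer by

        double normX = dirX / mag1, normY = dirY / mag1;
        double tX = targX / mag2,  tY = targY / mag2;

        // angle that rotates the direction vector onto (0, 1)
        double rotateAngle = (pi / 2.0) - std::atan2(normY, normX);
        // x component of the target vector after the same rotation
        double relTargX = std::cos(rotateAngle) * tX - std::sin(rotateAngle) * tY;

        std::puts(relTargX < 0.0 ? "Left!" : "Right!");
    }
    return 0;
}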
I'm representing a shape as a set of coordinates in 3D, I'm trying to rotate the whole object around an axis (In this case the Z axis, but I'd like to rotate around all three once I get it working).
I've written some code to do this using a rotation matrix:
//Coord is a 3D vector of floats
//pos is a coordinate
//angles is a 3d vector, each component is the angle of rotation around the component axis
//in radians
Coord<float> Polymers::rotateByMatrix(Coord<float> pos, const Coord<float> &angles)
{
    float xrot = angles[0];
    float yrot = angles[1];
    float zrot = angles[2];

    //z axis rotation
    pos[0] = (cosf(zrot) * pos[0] - (sinf(zrot) * pos[1]));
    pos[1] = (sinf(zrot) * pos[0] + cosf(zrot) * pos[1]);
    return pos;
}
The image below shows the object I'm trying to rotate (looking down the Z axis) before the rotation is attempted, each small sphere indicates one of the coordinates I'm trying to rotate
(image: http://www.cs.nott.ac.uk/~jqs/notsquashed.png)
The rotation is performed for the object by the following code:
//loop over each coordinate in the object
for (int k = start; k < finish; ++k)
{
    Coord<float> pos = mp[k-start];
    //move object away from origin to test rotation around origin
    pos += Coord<float>(5.0, 5.0, 5.0);

    pos = rotateByMatrix(pos, rots);

    //wrap particle position
    //these bits of code just wrap the coordinates around if they are
    //outside of the volume, and write the results to the positions
    //array, so they shouldn't affect the rotation.
    for (int l = 0; l < 3; ++l)
    {
        //wrap to ensure toroidal space
        if (pos[l] < origin[l]) pos[l] += dims[l];
        if (pos[l] >= (origin[l] + dims[l])) pos[l] -= dims[l];
        parts->m_hPos[k * 4 + l] = pos[l];
    }
}
The problem is that when I perform the rotation in this way, with the angles parameter set to (0.0,0.0,1.0) it works (sort of), but the object gets deformed, like so:
(image: http://www.cs.nott.ac.uk/~jqs/squashed.png)
which is not what I want. Can anyone tell me what I'm doing wrong and how I can rotate the entire object around the axis without deforming it?
Thanks
nodlams
Where you do your rotation in rotateByMatrix, you compute the new pos[0], but then feed that into the next line for computing the new pos[1]. So the pos[0] you're using to compute the new pos[1] is not the input, but the output. Store the result in a temp var and return that.
Coord<float> tmp;
tmp[0] = (cosf(zrot) * pos[0] - (sinf(zrot) * pos[1]));
tmp[1] = (sinf(zrot) * pos[0] + cosf(zrot) * pos[1]);
return tmp;
Also, pass the pos into the function as a const reference.
const Coord<float> &pos
Plus you should compute the sin and cos values once, store them in temporaries and reuse them.
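Putting those three suggestions together, the z-axis case might look like this (a sketch):

// Sketch of the corrected z-axis rotation: const-ref input, sin/cos computed
// once, and no read-after-write on pos[0].
Coord<float> Polymers::rotateByMatrix(const Coord<float> &pos, const Coord<float> &angles)
{
    const float zrot = angles[2];
    const float c = cosf(zrot);
    const float s = sinf(zrot);

    Coord<float> out = pos;          // copy so x is not overwritten before y uses it
    out[0] = c * pos[0] - s * pos[1];
    out[1] = s * pos[0] + c * pos[1];
    return out;
}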