Face recognition using webcam - C++

I have been using the OpenCV libraries for a while. I used and understood a program that detects only one face and recognizes it; the main functions are shown below.
This is the recognition function:
// Find the most likely person based on a detection. Returns the index, and stores the confidence value into pConfidence.
int findNearestNeighbor(float *projectedTestFace, float *pConfidence)
{
    double leastDistSq = DBL_MAX;
    int i, iTrain, iNearest = 0;

    for (iTrain = 0; iTrain < nTrainFaces; iTrain++)
    {
        double distSq = 0;

        for (i = 0; i < nEigens; i++)
        {
            float d_i = projectedTestFace[i] - projectedTrainFaceMat->data.fl[iTrain*nEigens + i];
#ifdef USE_MAHALANOBIS_DISTANCE
            distSq += d_i*d_i / eigenValMat->data.fl[i];  // Mahalanobis distance (might give better results than Euclidean distance)
#else
            distSq += d_i*d_i;  // Euclidean distance.
#endif
        }

        if (distSq < leastDistSq)
        {
            leastDistSq = distSq;
            iNearest = iTrain;
        }
    }

    // Return the confidence level based on the Euclidean distance,
    // so that similar images should give a confidence between 0.5 to 1.0,
    // and very different images should give a confidence between 0.0 to 0.5.
    *pConfidence = 1.0f - sqrt( leastDistSq / (float)(nTrainFaces * nEigens) ) / 255.0f;

    // Return the found index.
    return iNearest;
}
The returned value of iNearest is then used in the following code:
// Check which person it is most likely to be.
iNearest = findNearestNeighbor(projectedTestFace, &confidence);
nearest = trainPersonNumMat->data.i[iNearest];
printf("Most likely person in camera: '%s' (confidence=%f).\n", personNames[nearest-1].c_str(), confidence);
How can I make it recognize an unknown person as "unknown" and stop matching it against the faces in my database?
Also, how can I detect multiple faces at once?
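For reference, here is a minimal sketch of one way the "unknown" case could be handled, reusing the confidence value computed above; the threshold value is an assumption of mine and would need tuning against your own training data:

// Hypothetical sketch: treat low-confidence matches as "unknown".
// UNKNOWN_THRESHOLD is an assumed value - tune it on your own data.
const float UNKNOWN_THRESHOLD = 0.6f;

iNearest = findNearestNeighbor(projectedTestFace, &confidence);
if (confidence < UNKNOWN_THRESHOLD)
{
    printf("Most likely person in camera: unknown (confidence=%f).\n", confidence);
}
else
{
    nearest = trainPersonNumMat->data.i[iNearest];
    printf("Most likely person in camera: '%s' (confidence=%f).\n",
           personNames[nearest-1].c_str(), confidence);
}

For multiple faces, note that the Haar cascade detector (cvHaarDetectObjects in the old C API, or CascadeClassifier::detectMultiScale in the C++ API) already returns every face rectangle it finds; instead of keeping only the first detection, you could loop over all returned rectangles and run the recognition step on each cropped and preprocessed face.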

Applying a peak detection algorithm to a realtime data

I have a function to detect peaks in real-time data. The algorithm is described in this thread and looks like this:
std::vector<int> smoothedZScore(std::vector<float> input)
{
    // lag 5 for the smoothing functions
    int lag = 5;
    // 3.5 standard deviations for signal
    float threshold = 3.5;
    // between 0 and 1, where 1 is normal influence, 0.5 is half
    float influence = .5;

    if (input.size() <= lag + 2)
    {
        std::vector<int> emptyVec;
        return emptyVec;
    }

    // Initialise variables
    std::vector<int> signal(input.size(), 0.0);
    std::vector<float> filteredY(input.size(), 0.0);
    std::vector<float> avgFilter(input.size(), 0.0);
    std::vector<float> stdFilter(input.size(), 0.0);

    std::vector<float> subVecStart(input.begin(), input.begin() + lag);
    double sum = std::accumulate(std::begin(subVecStart), std::end(subVecStart), 0.0);
    double mean = sum / subVecStart.size();

    double accum = 0.0;
    std::for_each(std::begin(subVecStart), std::end(subVecStart), [&](const double d) {
        accum += (d - mean) * (d - mean);
    });
    double stdev = sqrt(accum / (subVecStart.size() - 1));

    //avgFilter[lag] = mean(subVecStart);
    avgFilter[lag] = mean;
    //stdFilter[lag] = stdDev(subVecStart);
    stdFilter[lag] = stdev;

    for (size_t i = lag + 1; i < input.size(); i++)
    {
        if (std::abs(input[i] - avgFilter[i - 1]) > threshold * stdFilter[i - 1])
        {
            if (input[i] > avgFilter[i - 1])
            {
                signal[i] = 1; //# Positive signal
            }
            else
            {
                signal[i] = -1; //# Negative signal
            }
            // Make influence lower
            filteredY[i] = influence * input[i] + (1 - influence) * filteredY[i - 1];
        }
        else
        {
            signal[i] = 0; //# No signal
            filteredY[i] = input[i];
        }

        // Adjust the filters
        std::vector<float> subVec(filteredY.begin() + i - lag, filteredY.begin() + i);
        // avgFilter[i] = mean(subVec);
        // stdFilter[i] = stdDev(subVec);
    }
    return signal;
}
In my code, I'm reading real-time 3-axis accelerometer values from an IMU sensor and displaying them as a graph. I need to detect peaks in the signal using the above algorithm, so I added the function to my code.
Let's say the real-time values are the following:
double x = sample->acceleration_g[0];
double y = sample->acceleration_g[1];
double z = sample->acceleration_g[2];
How do I pass these values to the above function and detect the peaks?
I tried calling this:
smoothedZScore(x)
but it gives me an error:
settings.cpp:230:40: error: no matching function for call to 'smoothedZScore'
settings.cpp:92:18: note: candidate function not viable: no known conversion from 'double' to 'std::vector<float>' for 1st argument
EDIT
The algorithm needs a minimum of 7 samples to work with, so I guess I need to store my real-time data in a buffer. But I'm having difficulty understanding how to store samples in a buffer and apply the peak detection algorithm to them.
Can you show me a possible solution to this?
You will need to rewrite the algorithm. Your problem isn't just a real-time problem; you also need a causal solution, and the function you have is not causal.
Practically speaking, you will need a class, and that class will need to incrementally calculate the standard deviation.
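As a rough illustration of that idea, here is a minimal, untested sketch of a causal, sample-by-sample version: it keeps the last lag filtered values in a small sliding window, recomputes the mean and standard deviation from that window, and returns a signal for each new sample. The class name and the way samples are fed in are my own assumptions, not part of the original algorithm:

#include <cmath>
#include <cstddef>
#include <deque>
#include <numeric>

// Hypothetical causal peak detector: one sample in, one signal out.
class StreamingZScorePeakDetector {
public:
    StreamingZScorePeakDetector(std::size_t lag = 5, float threshold = 3.5f, float influence = 0.5f)
        : lag_(lag), threshold_(threshold), influence_(influence) {}

    // Feed one new sample; returns +1 (positive peak), -1 (negative peak) or 0 (no signal).
    int addSample(float x)
    {
        if (window_.size() < lag_) {          // still warming up, no signal yet
            window_.push_back(x);
            lastFiltered_ = x;
            return 0;
        }

        const float mean  = meanOfWindow();
        const float stdev = stdevOfWindow(mean);

        int signal = 0;
        float filtered = x;
        if (std::fabs(x - mean) > threshold_ * stdev) {
            signal = (x > mean) ? 1 : -1;
            filtered = influence_ * x + (1.0f - influence_) * lastFiltered_;
        }

        window_.pop_front();                  // slide the window of filtered values
        window_.push_back(filtered);
        lastFiltered_ = filtered;
        return signal;
    }

private:
    float meanOfWindow() const
    {
        return std::accumulate(window_.begin(), window_.end(), 0.0f) / window_.size();
    }
    float stdevOfWindow(float mean) const
    {
        float accum = 0.0f;
        for (float v : window_) accum += (v - mean) * (v - mean);
        return std::sqrt(accum / (window_.size() - 1));
    }

    std::size_t lag_;
    float threshold_;
    float influence_;
    std::deque<float> window_;
    float lastFiltered_ = 0.0f;
};

You could then keep one detector per axis and call it from the sensor callback, e.g. int peakX = detectorX.addSample(sample->acceleration_g[0]); (detectorX is, again, just an assumed name).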

How to implement a C++ function which creates a swirl on an image

imageData = new double*[imageHeight];
for (int i = 0; i < imageHeight; i++) {
    imageData[i] = new double[imageWidth];
    for (int j = 0; j < imageWidth; j++) {
        // compute the distance and angle from the swirl center:
        double pixelX = (double)i - swirlCenterX;
        double pixelY = (double)j - swirlCenterY;
        double pixelDistance = pow(pow(pixelX, 2) + pow(pixelY, 2), 0.5);
        double pixelAngle = atan2(pixelX, pixelY);

        // double swirlAmount = 1.0 - (pixelDistance/swirlRadius);
        // if(swirlAmount > 0.0) {
        //     double twistAngle = swirlTwists * swirlAmount * PI * 2.0;
        double twistAngle = swirlTwists * pixelDistance * PI * 2.0;

        // adjust the pixel angle and compute the adjusted pixel co-ordinates:
        pixelAngle += twistAngle;
        pixelX = cos(pixelAngle) * pixelDistance;
        pixelY = sin(pixelAngle) * pixelDistance;
        // }

        (this)->setPixel(i, j, tempMatrix[(int)(swirlCenterX + pixelX)][(int)(swirlCenterY + pixelY)]);
    }
}
I am trying to implement a C++ function (code above) based on the following pseudo-code, which is supposed to create a swirl on an image, but I have some continuity problems at the borders.
The function I have at the moment is able to apply the swirl on a disk of a given size and to deform it almost as I wished, but its influence doesn't decrease as we get close to the borders. I tried to multiply the angle of rotation by a factor of 1 - (r/R) (with r the distance between the current pixel and the center of the swirl, and R the radius of the swirl), but this doesn't give the effect I hoped for.
Moreover, I noticed that at some parts of the border a thin white line appears (which means that the values of the pixels there are equal to 1), and I can't exactly explain why.
Maybe some of the problems I have are linked to the atan2 C++ standard function.
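For what it is worth, here is a minimal, untested sketch of how the inner loop could look with a radial falloff and with the sampled coordinates clamped to the image bounds; the clamping is what would prevent reading just outside tempMatrix, which is one plausible cause of the thin white line. Names such as imageWidth, imageHeight, swirlCenterX/Y, swirlRadius, swirlTwists, PI and tempMatrix are taken from the code above; the clamp helper is my own addition:

// Sketch only: swirl with falloff and clamped sampling (needs <algorithm> and <cmath>).
auto clampIndex = [](int v, int maxExclusive) {
    return std::max(0, std::min(v, maxExclusive - 1));
};

for (int i = 0; i < imageHeight; i++) {
    for (int j = 0; j < imageWidth; j++) {
        double dx = (double)i - swirlCenterX;
        double dy = (double)j - swirlCenterY;
        double distance = sqrt(dx*dx + dy*dy);

        // Falloff: full twist at the center, no twist at the swirl radius and beyond.
        double swirlAmount = 1.0 - (distance / swirlRadius);
        if (swirlAmount > 0.0) {
            double twistAngle = swirlTwists * swirlAmount * PI * 2.0;
            double angle = atan2(dy, dx) + twistAngle;
            dx = cos(angle) * distance;
            dy = sin(angle) * distance;
        }

        // Clamp the source coordinates so we never sample outside the image.
        int srcI = clampIndex((int)(swirlCenterX + dx), imageHeight);
        int srcJ = clampIndex((int)(swirlCenterY + dy), imageWidth);
        this->setPixel(i, j, tempMatrix[srcI][srcJ]);
    }
}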

Impact of cubic and Catmull splines on an image

I am trying to implement a function like the one shown below.
For this I am trying to use cubic interpolation and Catmull interpolation (checking both separately to compare for the best result). What I don't understand is what impact these interpolations have on the image, how we can get the values of the points where we clicked to set that curve, and whether we need to define the function for these black points on the image separately.
I am getting help from these resources:
Source 1
Source 2
Approx. the same focus
Edit
int main(int argc, const char** argv)
{
    Mat input = imread("E:\\img2.jpg");

    for (int i = 0; i < input.rows; i++)
    {
        for (int p = 0; p < input.cols; p++)
        {
            //for(int t=0; t<input.channels(); t++)
            //{
            input.at<cv::Vec3b>(i,p)[0] = 255*correction(input.at<cv::Vec3b>(i,p)[0]/255.0, ctrl, N); //B
            input.at<cv::Vec3b>(i,p)[1] = 255*correction(input.at<cv::Vec3b>(i,p)[1]/255.0, ctrl, N); //G
            input.at<cv::Vec3b>(i,p)[2] = 255*correction(input.at<cv::Vec3b>(i,p)[2]/255.0, ctrl, N); //R
            //}
        }
    }

    imshow("image", input);
    waitKey();
}
So if your control points always have fixed x coordinates and are linearly dispersed along the whole range, then you can do it like this:
//---------------------------------------------------------------------------
const int N=5;   // number of control points (must be >= 4)
float ctrl[N]=   // control points y values initiated with linear function y=x
{                // x value is index*1.0/(N-1)
    0.00,
    0.25,
    0.50,
    0.75,
    1.00,
};
//---------------------------------------------------------------------------
float correction(float col,float *ctrl,int n)
{
    float di=1.0/float(n-1);
    int i0,i1,i2,i3;
    float t,tt,ttt;
    float a0,a1,a2,a3,d1,d2;
    // find start control point
    col*=float(n-1);
    i1=col; col-=i1;
    i0=i1-1; if (i0< 0) i0=0;
    i2=i1+1; if (i2>=n) i2=n-1;
    i3=i1+2; if (i3>=n) i3=n-1;
    // compute interpolation coefficients
    d1=0.5*(ctrl[i2]-ctrl[i0]);
    d2=0.5*(ctrl[i3]-ctrl[i1]);
    a0=ctrl[i1];
    a1=d1;
    a2=(3.0*(ctrl[i2]-ctrl[i1]))-(2.0*d1)-d2;
    a3=d1+d2+(2.0*(-ctrl[i2]+ctrl[i1]));
    // now interpolate new color intensity
    t=col; tt=t*t; ttt=tt*t;
    t=a0+(a1*t)+(a2*tt)+(a3*ttt);
    return t;
}
//---------------------------------------------------------------------------
It uses 4-point 1D cubic interpolation (from the link in my comment above). To get the new color, just do this:
new_col = correction(old_col,ctrl,N);
This is how it looks:
The green arrows show the derivative error (always only at the start and end points of the whole curve). It can be corrected by adding 2 more control points, one before and one after all the others ...
[Notes]
The color range is < 0.0 , 1.0 >, so if you need a different range, just multiply the result and divide the input ...
[edit1] the start/end derivatives fixed a little
float correction(float col,float *ctrl,int n)
{
    float di=1.0/float(n-1);
    int i0,i1,i2,i3;
    float t,tt,ttt;
    float a0,a1,a2,a3,d1,d2;
    // find start control point
    col*=float(n-1);
    i1=col; col-=i1;
    i0=i1-1;
    i2=i1+1; if (i2>=n) i2=n-1;
    i3=i1+2;
    // compute interpolation coefficients
    if (i0>=0) d1=0.5*(ctrl[i2]-ctrl[i0]); else d1=ctrl[i2]-ctrl[i1];
    if (i3< n) d2=0.5*(ctrl[i3]-ctrl[i1]); else d2=ctrl[i2]-ctrl[i1];
    a0=ctrl[i1];
    a1=d1;
    a2=(3.0*(ctrl[i2]-ctrl[i1]))-(2.0*d1)-d2;
    a3=d1+d2+(2.0*(-ctrl[i2]+ctrl[i1]));
    // now interpolate new color intensity
    t=col; tt=t*t; ttt=tt*t;
    t=a0+(a1*t)+(a2*tt)+(a3*ttt);
    return t;
}
[edit2] just some clarification on the coefficients
They are all derived from these conditions:
y(t)  = a0 + a1*t + a2*t*t + a3*t*t*t   // direct value
y'(t) = a1 + 2*a2*t + 3*a3*t*t          // first derivative
Now you have points y0,y1,y2,y3, so I chose y(0)=y1 and y(1)=y2, which gives C0 continuity (the value is the same at the joint points between curves).
Now I need C1 continuity, so I add the condition that y'(0) must be the same as y'(1) of the previous curve.
For y'(0) I choose the average direction between points y0,y1,y2.
For y'(1) I choose the average direction between points y1,y2,y3.
These are the same for the next/previous segments, so it is enough. Now put it all together:
y(0)  = y1 = a0 + a1*0 + a2*0*0 + a3*0*0*0
y(1)  = y2 = a0 + a1*1 + a2*1*1 + a3*1*1*1
y'(0) = 0.5*(y2-y0) = a1 + 2*a2*0 + 3*a3*0*0
y'(1) = 0.5*(y3-y1) = a1 + 2*a2*1 + 3*a3*1*1
And solve this system of equations (a0,a1,a2,a3 = ?). You will get what I have in the source code above. If you need different properties of the curve, just make different equations ...
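For completeness, here is the algebra spelled out (nothing new, just the solution of the four conditions above, with d1 and d2 as shorthand for the chosen end slopes):

d1 = 0.5*(y2-y0),  d2 = 0.5*(y3-y1)   // shorthand for the chosen end slopes
a0 = y1                               // from y(0)  = y1
a1 = d1                               // from y'(0) = d1
// the remaining two conditions reduce to:
//   y(1)  = y2  ->    a2 +   a3 = (y2-y1) - d1
//   y'(1) = d2  ->  2*a2 + 3*a3 = d2 - d1
// solving this 2x2 system gives:
a2 = 3*(y2-y1) - 2*d1 - d2
a3 = d1 + d2 - 2*(y2-y1)

which matches the coefficients used in the source code above.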
[edit3] usage
pic1=pic0;                      // copy source image to destination; pic is my image class ...
for (y=0;y<pic1.ys;y++)         // go through all pixels
 for (x=0;x<pic1.xs;x++)
    {
    float i;
    // read, convert, write pixel
    i=pic1.p[y][x].db[0]; i=255.0*correction(i/255.0, red control points  ,5); pic1.p[y][x].db[0]=i;
    i=pic1.p[y][x].db[1]; i=255.0*correction(i/255.0, green control points,5); pic1.p[y][x].db[1]=i;
    i=pic1.p[y][x].db[2]; i=255.0*correction(i/255.0, blue control points ,5); pic1.p[y][x].db[2]=i;
    }
On top are the control points per R,G,B channel. On the bottom left is the original image and on the bottom right is the corrected image.

Sporadic Collision Detection

I've been working on detecting collisions between two objects in my game. Right now everything travels vertically, but I would like to keep the option for other movement open. It's a classic 2D vertical space shooter.
Right now I loop through every object, checking for collisions:
for (std::list<Object*>::iterator iter = mObjectList.begin(); iter != mObjectList.end();) {
    Object *m = (*iter);
    for (std::list<Object*>::iterator innerIter = ++iter; innerIter != mObjectList.end(); innerIter++) {
        Object *s = (*innerIter);
        if (m->getType() == s->getType()) {
            break;
        }
        if (m->checkCollision(s)) {
            m->onCollision(s);
            s->onCollision(m);
        }
    }
}
Here is how I check for a collision:
bool checkCollision(Object *other) {
    float radius = mDiameter / 2.f;
    float theirRadius = other->getDiameter() / 2.f;
    Vector<float> ourMidPoint = getAbsoluteMidPoint();
    Vector<float> theirMidPoint = other->getAbsoluteMidPoint();

    // If the other object is in between our path on the y axis
    if (std::min(getAbsoluteMidPoint().y - radius, getPreviousAbsoluteMidPoint().y - radius) <= theirMidPoint.y &&
        theirMidPoint.y <= std::max(getAbsoluteMidPoint().y + radius, getPreviousAbsoluteMidPoint().y + radius)) {
        // Get the distance between the midpoints on the x axis
        float xd = abs(ourMidPoint.x - theirMidPoint.x);
        // If the distance between the two midpoints
        // is greater than both of their radii together
        // then they are too far away to collide
        if (xd > radius + theirRadius) {
            return false;
        } else {
            return true;
        }
    }
    return false;
}
The problem is that it will sometimes detect collisions correctly, but at other times it doesn't detect them at all. It's not the if statement breaking out of the object loop, because the objects do have different types. The closer an object is to the top of the screen, the better the chance that the collision gets detected correctly; closer to the bottom of the screen, the lower the chance of it getting detected correctly, or even at all. However, these situations don't always occur. The diameters of the objects are massive (10 and 20) to see if that was the problem, but it doesn't help much at all.
EDIT - Updated Code
bool checkCollision(Object *other) {
    float radius = mDiameter / 2.f;
    float theirRadius = other->getDiameter() / 2.f;
    Vector<float> ourMidPoint = getAbsoluteMidPoint();
    Vector<float> theirMidPoint = other->getAbsoluteMidPoint();

    // Find the distance between the two points from the center of the object
    float a = theirMidPoint.x - ourMidPoint.x;
    float b = theirMidPoint.y - ourMidPoint.y;
    // Find the hypotenuse
    double c = (a*a) + (b*b);
    double radii = pow(radius + theirRadius, 2.f);
    // If the distance between the points is less than or equal to the radius
    // then the circles intersect
    if (c <= radii*radii) {
        return true;
    } else {
        return false;
    }
}
Two circular objects collide when the distance between their centers is small enough. You can use the following code to check this:
double distanceSquared =
    pow(ourMidPoint.x - theirMidPoint.x, 2.0) +
    pow(ourMidPoint.y - theirMidPoint.y, 2.0);
bool haveCollided = (distanceSquared <= pow(radius + theirRadius, 2.0));
In order to check whether there was a collision between two points in time, you can check for a collision at the start of the time interval and at the end of it; however, if the objects move very fast, the collision detection can fail (I guess you have encountered this problem with falling objects, which have the fastest speed at the bottom of the screen).
The following might make the collision detection more reliable (though still not perfect). Suppose the objects move with constant speed; then, their position is a linear function of time:
our_x(t) = our_x0 + our_vx * t;
our_y(t) = our_y0 + our_vy * t;
their_x(t) = their_x0 + their_vx * t;
their_y(t) = their_y0 + their_vy * t;
Now you can define the (squared) distance between them as a quadratic function of time. Find the time at which it assumes its minimum value (i.e. where its derivative is 0); if this time belongs to the current time interval, calculate the minimum value and check it for collision.
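As a rough sketch of that calculation for the constant-velocity case (the function name and parameter names are mine, not from your code):

// Hypothetical swept-circle test: returns true if the two circles come within
// (radiusA + radiusB) of each other at some t in [0, 1] (i.e. during one frame).
bool sweptCircleCollision(float ax0, float ay0, float avx, float avy, float radiusA,
                          float bx0, float by0, float bvx, float bvy, float radiusB)
{
    // Relative position and velocity of B with respect to A.
    float px = bx0 - ax0, py = by0 - ay0;
    float vx = bvx - avx, vy = bvy - avy;

    // Squared distance D(t) = |p + v*t|^2 is quadratic in t; its minimum lies at
    // t* = -(p.v)/(v.v). Clamp t* to the current interval [0, 1].
    float vv = vx*vx + vy*vy;
    float t = 0.0f;
    if (vv > 0.0f) {
        t = -(px*vx + py*vy) / vv;
        if (t < 0.0f) t = 0.0f;
        if (t > 1.0f) t = 1.0f;
    }

    // Evaluate the squared distance at the closest approach and compare with the radii.
    float cx = px + vx * t;
    float cy = py + vy * t;
    float r = radiusA + radiusB;
    return (cx*cx + cy*cy) <= r*r;
}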
This must be enough to detect collisions almost perfectly; if your application works heavily with free-falling objects, you might want to refine the movement functions to be quadratic:
our_x(t) = our_x0 + our_v0x * t;
our_y(t) = our_y0 + our_v0y * t + g/2 * t^2;
This logic is wrong:
if(std::min(getAbsoluteMidPoint().y - radius, getPreviousAbsoluteMidPoint().y - radius) <= theirMidPoint.y &&
   theirMidPoint.y <= std::max(getAbsoluteMidPoint().y + radius, getPreviousAbsoluteMidPoint().y + radius))
{
    // then a collision is possible, check x
}
(The logic inside the braces is wrong too, but that should produce false positives, not false negatives.) Checking whether a collision has occurred during a time interval can be tricky; I'd suggest checking for a collision at the present time, and getting that to work first. When you check for a collision (now) you can't check x and y independently, you must look at the distance between the object centers.
EDIT:
The edited code is still not quite right.
// Find the hypotenuse
double c = (a*a)+(b*b);                      // actual hypotenuse squared
double radii = pow(radius+theirRadius, 2.f); // critical hypotenuse squared
if(c <= radii*radii) {                       // now you compare a distance^2 to a distance^4
    return true;                             // collision
}
It should be either this:
double c2 = (a*a)+(b*b);                  // actual hypotenuse squared
double r2 = pow(radius+theirRadius, 2.f); // critical hypotenuse squared
if(c2 <= r2) {
    return true; // collision
}
or this:
double c2 = (a*a)+(b*b);         // actual hypotenuse squared
double c = pow(c2, 0.5);         // actual hypotenuse
double r = radius + theirRadius; // critical hypotenuse
if(c <= r) {
    return true; // collision
}
Your inner loop needs to start at mObjectList.begin() instead of iter.
The inner loop needs to iterate over the entire list, otherwise you miss collision candidates the further you progress in the outer loop.
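A minimal sketch of that suggestion (with two assumptions on my part: the object is skipped when compared against itself, and same-type pairs are skipped with continue instead of ending the inner scan with break):

for (std::list<Object*>::iterator iter = mObjectList.begin(); iter != mObjectList.end(); ++iter) {
    Object *m = *iter;
    for (std::list<Object*>::iterator innerIter = mObjectList.begin(); innerIter != mObjectList.end(); ++innerIter) {
        Object *s = *innerIter;
        if (s == m) {
            continue; // don't test an object against itself
        }
        if (m->getType() == s->getType()) {
            continue; // skip this candidate, but keep scanning the rest of the list
        }
        if (m->checkCollision(s)) {
            // the symmetric s->onCollision(m) happens when the outer loop reaches s
            m->onCollision(s);
        }
    }
}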

Implementing seek behaviour of objects on a plane in OpenSceneGraph?

I have created an open plane area with thin cylinders on it like pucks; they bounce around the area and have collision detection for some larger cylinders also placed on the plane. I am now trying to get them to head towards a set point on the plane using a steering method.
The steering works for avoiding the obstacles: I calculate the distance from the obstacle, then the angle between the direction of travel and the direction of the obstacle, and when the puck is too close it steers left or right based on the calculated angle. The same technique reversed fails to work for steering towards a point. I have tried using both acos and atan2 to calculate the angle between the direction of travel and the target direction, and from the outputs I believe this part is right, but when I try to use that calculation to determine when to steer towards the target I get unexpected results, sometimes random turning.
#include "Puck.h"
#include <iostream>
#include <fstream>
using namespace std;
#include <math.h>
ofstream fout("danna.txt");
#ifndef M_PI
#define M_PI 3.1415
#endif
class TranslateCB : public osg::NodeCallback
{
public:
TranslateCB() : _dx( 0. ), _dy( 0. ), _dirx(1), _diry(0), _inc(0.1), _theta(0) {}
TranslateCB(Puck** pp, Obstacle** ob, int count, double r, double x, double y) : _dx( 0. ), _dy( 0. ),
_dirx(2.0*rand()/RAND_MAX-1), _diry(2.0*rand()/RAND_MAX-1), _inc(0.3), _theta(0)
{
obstacles = ob;
ob_count = count;
_radius = r;
_x = x;
_y = y;
puckH = pp;
}
virtual void operator()( osg::Node* node,osg::NodeVisitor* nv )
{
osg::MatrixTransform* mt =
dynamic_cast<osg::MatrixTransform*>( node );
osg::Matrix mR, mT;
mT.makeTranslate( _dx , _dy, 0. );
mt->setMatrix( mT );
double ob_dirx;
double ob_diry;
double ob_dist;
double centerX=0, centerY =0;
_theta = 0;
double min = 4;
// location that I am trying to get the pucks to head towards
centerX = 1;
centerY = 5;
double tDirx = (_x+_dx) - centerX;
double tDiry = (_y+_dy) - centerY;
double tDist = sqrt(tDirx*tDirx+tDiry*tDiry); //distance to target location
// normalizing my target direction
tDirx = tDirx/tDist;
tDiry = tDiry/tDist;
double hDist = sqrt(_dirx*_dirx + _diry*_diry); //distance to next heading
_dirx= _dirx/hDist;
_diry= _diry/hDist;
double cAngle = acos(_dirx*tDirx+_diry*tDiry); //using inverse of cos to calculate angle between directions
double tAngle = atan2(centerY - (_y+_dy),centerX - (_x+_dx)); // using inverse of tan to calculate angle between directions
double tMin = tDist*sin(cAngle);
//if statement used to define when to apply steering direction
if(tMin > 3)
{
if(tDist < 1){ _theta = 0; } //puck is inside target location, so keep travelling straight
if(cAngle > M_PI/2){ _theta = -0.1; } //turn left
else{ _theta = 0.1; } //turn right
}
else{ _theta = 0; }
////// The collision detection for the obstacles that works on the same princables that I am using above
for(int i = 0; i < ob_count; i++)
{
ob_dirx = (_x+_dx) - obstacles[i]->x;
ob_diry = (_y+_dy) - obstacles[i]->y;
ob_dist = sqrt(ob_dirx*ob_dirx+ob_diry*ob_diry);
if (ob_dist < 3) {
//normalise directions
double ob_norm = sqrt(ob_dirx*ob_dirx+ob_diry*ob_diry);
ob_dirx = (ob_dirx)/ob_norm;
ob_diry = (ob_diry)/ob_norm;
double norm = sqrt(_dirx*_dirx+_diry*_diry);
_dirx = (_dirx)/norm;
_diry = (_diry)/norm;
//calculate angle between direction travelling, and direction to obstacle
double angle = acos(_dirx*ob_dirx + _diry*ob_diry);
//calculate closest distance between puck and obstacle if continues on same path
double min_dist = ob_dist*sin(angle);
if(min_dist < _radius + obstacles[i]->radius && ob_dist < min+obstacles[i]->radius)
{
min = ob_dist;
if(ob_dist < _radius + obstacles[i]->radius){ _theta = 0; }
else if(angle <= M_PI/2){ _theta = -0.3; }
else{ _theta = 0.3; }
}
}
}
//change direction accordingly
_dirx = _dirx*cos(_theta) + _diry*sin(_theta);
_diry = _diry*cos(_theta) - _dirx*sin(_theta);
_dx += _inc*_dirx;
if((_x+_dx > 20 && _dirx > 0) || (_x+_dx < -20 && _dirx < 0))
{
_dirx = -_dirx;
_diry += (0.2*rand()/RAND_MAX-0.1); //add randomness to bounce
}
_dy += _inc*_diry;
if((_y+_dy > 20 && _diry > 0) || (_y+_dy < -20 && _diry < 0))
{
_diry = -_diry;
_dirx += (0.2*rand()/RAND_MAX-0.1); //add randomness to bounce
}
traverse( node, nv );
}
private:
double _dx,_dy;
double _dirx,_diry;
double _inc;
double _theta;
double _radius;
double _x,_y;
Obstacle** obstacles;
Puck** puckH;
int ob_count;
};
Puck::Puck()
{
}

void Puck::createBoids(Puck** pucks, Group *root, Obstacle** obstacles, int count, double xx, double yy)
{
    // geometry
    radius = 0.2;
    x = xx;
    y = yy;
    ob_count = count;

    Cylinder *shape = new Cylinder(Vec3(x,y,0), radius, 0.1);
    ShapeDrawable *draw = new ShapeDrawable(shape);
    draw->setColor(Vec4(1,0,0,1));
    Geode *geode = new Geode();
    geode->addDrawable(draw);

    // transformation
    MatrixTransform *T = new MatrixTransform();
    TranslateCB *tcb = new TranslateCB(pucks, obstacles, ob_count, radius, x, y);
    T->setUpdateCallback(tcb);
    T->addChild(geode);
    root->addChild(T);
}
Any help would be amazing!
The problem here is that the technique that "works" for avoiding obstacles only ever runs when the puck is heading towards the obstacle. This special condition puts the direction of the puck and the direction of the obstacle in adjacent quadrants.
When attempting to make the pucks steer towards the target point, however, the technique breaks down because the puck will most likely be heading away from it, so the condition that the target and direction vectors are in adjacent quadrants no longer holds.
The correct way to determine the steering direction is to rotate the target vector by the angle that would make the direction vector point straight up, to (0, 1). Now that the target vector is expressed relative to the direction vector (0, 1), looking at its x component determines the steering direction. If the x component of the target vector is negative, the puck must turn left to steer towards the target (increase the angle). If the x component is positive, the puck must turn right (decrease the angle).
Consider the following snippet, written in Python to verify this; it should still be easy enough to read to grasp the concept:
from math import *

dirX = 0.0
dirY = 0.0
targX = 1.0
targY = 0.0

def dir():
    global dirX, dirY, targX, targY
    # get magnitude of direction
    mag1 = sqrt(dirX*dirX + dirY*dirY)
    if mag1 != 0:
        # normalize direction vector
        normX = dirX / mag1
        normY = dirY / mag1
        # get magnitude of target vector
        mag2 = sqrt(targX*targX + targY*targY)
        if mag2 != 0:
            # normalize target vector
            targX = targX / mag2
            targY = targY / mag2
            # find the angle needed to rotate the dir vector to (0, 1)
            rotateAngle = (pi/2.0) - atan2(normY, normX)
            # rotate the targ vector by that angle (we only care about the x component)
            relTargX = cos(rotateAngle) * targX - sin(rotateAngle) * targY
            # if the target vector's x is negative
            if relTargX < 0:
                # turn left
                print "Left!"
            # otherwise the target vector is 0 or positive
            else:
                # turn right
                print "Right!"

def out():
    global dirX, dirY, targX, targY
    # function just prints values to the screen
    print "dir(%f, %f) targ(%f, %f)" % (dirX, dirY, targX, targY)

# for values 0 to 360
for i in range(360):
    # pretend this is the puck's direction
    dirX = sin(radians(i))
    dirY = cos(radians(i))
    # print dir and target vectors to screen
    out()
    # print the direction to turn
    dir()
I suppose I could've written this in C++, but compared to running a Python prompt it's a royal pain. It is as readable as any pseudo-code I could've written, and the concepts will work regardless of language.
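For reference, the same check could be translated to C++ in the shape of the update callback above. This is only a sketch; the helper name and parameters are mine, and it simply returns which way to turn:

#include <cmath>

// Hypothetical helper: returns -1 to turn left, +1 to turn right, 0 if either vector is degenerate.
// (dirX, dirY) is the current heading, (targX, targY) points from the puck to the target.
int steeringSide(double dirX, double dirY, double targX, double targY)
{
    double dirMag  = std::sqrt(dirX*dirX + dirY*dirY);
    double targMag = std::sqrt(targX*targX + targY*targY);
    if (dirMag == 0.0 || targMag == 0.0)
        return 0;

    // Rotate the target vector by the angle that brings the heading to (0, 1),
    // then look only at the x component of the rotated target.
    double rotateAngle = (M_PI / 2.0) - std::atan2(dirY / dirMag, dirX / dirMag);
    double relTargX = std::cos(rotateAngle) * (targX / targMag)
                    - std::sin(rotateAngle) * (targY / targMag);

    return (relTargX < 0.0) ? -1 : 1; // negative x -> turn left, otherwise turn right
}

Inside operator() this could drive _theta directly, for example _theta = 0.1 * steeringSide(_dirx, _diry, centerX - (_x+_dx), centerY - (_y+_dy)); (again only a sketch; the sign may need flipping depending on which way a positive _theta rotates the heading in your code).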