How to fit 2D scatter data with a line in C++

I used to work with MATLAB, where I can use p = polyfit(x,y,1) to estimate the best-fit line for scatter data in a plot. I was wondering which resources I can rely on to implement a line fitting algorithm in C++. I understand there are many algorithms for this subject; I expect the algorithm to be fast while achieving accuracy comparable to MATLAB's polyfit function.

This page describes the algorithm more simply than Wikipedia, without extra steps to calculate the means etc.: http://faculty.cs.niu.edu/~hutchins/csci230/best-fit.htm . Almost quoted from there, in C++ it is:
#include <vector>
#include <cmath>

struct Point {
    double _x, _y;
};

struct Line {
    double _slope, _yInt;
    double getYforX(double x) {
        return _slope * x + _yInt;
    }
    // Construct line from points
    bool fitPoints(const std::vector<Point> &pts) {
        int nPoints = pts.size();
        if (nPoints < 2) {
            // Fail: infinitely many lines pass through a single point
            return false;
        }
        double sumX = 0, sumY = 0, sumXY = 0, sumX2 = 0;
        for (int i = 0; i < nPoints; i++) {
            sumX  += pts[i]._x;
            sumY  += pts[i]._y;
            sumXY += pts[i]._x * pts[i]._y;
            sumX2 += pts[i]._x * pts[i]._x;
        }
        double xMean = sumX / nPoints;
        double yMean = sumY / nPoints;
        double denominator = sumX2 - sumX * xMean;
        // You can tune the eps (1e-7) below for your specific task
        if (std::fabs(denominator) < 1e-7) {
            // Fail: the points seem to form a vertical line
            return false;
        }
        _slope = (sumXY - sumX * yMean) / denominator;
        _yInt = yMean - _slope * xMean;
        return true;
    }
};
Please be aware that both this algorithm and the algorithm from Wikipedia ( http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line ) fail when the "best" description of the points is a vertical line. They fail because they use the
y = k*x + b
line equation, which is intrinsically incapable of describing vertical lines. If you also want to cover the cases where the data points are "best" described by a vertical line, you need a line fitting algorithm that uses the
A*x + B*y + C = 0
line equation. You can still modify the current algorithm to produce that equation:
y = k*x + b <=>
y - k*x - b = 0 <=>
B=1, A=-k, C=-b
In terms of the above code:
B=1, A=-_slope, C=-_yInt
And in "then" block of the if checking for denominator equal to 0, instead of // Fail: it seems a vertical line, produce the following line equation:
x = xMean <=>
x - xMean = 0 <=>
A=1, B=0, C=-xMean
I've just noticed that the original article I was referring to has been deleted. This web page proposes a slightly different formula for line fitting: http://hotmath.com/hotmath_help/topics/line-of-best-fit.html
double denominator = sumX2 - 2 * sumX * xMean + nPoints * xMean * xMean;
...
_slope = (sumXY - sumY*xMean - sumX * yMean + nPoints * xMean * yMean) / denominator;
The formulas are identical because nPoints*xMean == sumX and nPoints*xMean*yMean == sumX * yMean == sumY * xMean.

I would suggest coding it from scratch; it is a very simple implementation in C++. You can compute both the intercept and gradient for a least-squares fit (the same method as polyfit) directly from your data using the formulas here:
http://en.wikipedia.org/wiki/Simple_linear_regression#Fitting_the_regression_line
These are closed-form formulas that you can easily evaluate yourself using loops. If you were using higher-degree fits, I would suggest a matrix library or more sophisticated algorithms, but for simple linear regression as you describe above, this is all you need. Matrices and linear algebra routines would be overkill for such a problem (in my opinion).
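For concreteness, here is a minimal sketch of evaluating those closed-form formulas with loops (the names SimplePoint and fitSimple are illustrative, not from any library): the slope is the centered sum Σ(x-x̄)(y-ȳ) divided by Σ(x-x̄)², and the intercept follows from the means.

#include <cstddef>
#include <vector>

struct SimplePoint { double x, y; };

bool fitSimple(const std::vector<SimplePoint> &pts, double &slope, double &intercept) {
    const std::size_t n = pts.size();
    if (n < 2) return false;
    double meanX = 0, meanY = 0;
    for (const auto &p : pts) { meanX += p.x; meanY += p.y; }
    meanX /= n;
    meanY /= n;
    double cov = 0, var = 0; // Σ(x-x̄)(y-ȳ) and Σ(x-x̄)²
    for (const auto &p : pts) {
        cov += (p.x - meanX) * (p.y - meanY);
        var += (p.x - meanX) * (p.x - meanX);
    }
    if (var == 0.0) return false; // all x equal: vertical line, no finite slope
    slope = cov / var;
    intercept = meanY - slope * meanX;
    return true;
}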

The equation of a line is A*x + B*y + C = 0.
So (when B is not too close to zero) it can easily be converted to y = (-A/B)*x + (-C/B).
#include <array>
#include <cmath>
#include <utility>
#include <vector>

typedef double scalar_type;
typedef std::array<scalar_type, 2> point_type;
typedef std::vector<point_type> cloud_type;

bool fit(scalar_type &A, scalar_type &B, scalar_type &C, cloud_type const &cloud)
{
    if (cloud.size() < 2) { return false; }
    scalar_type X = 0, Y = 0, XY = 0, X2 = 0, Y2 = 0;
    for (auto const &point : cloud)
    {   // Do all calculations symmetric with respect to X and Y
        X  += point[0];
        Y  += point[1];
        XY += point[0] * point[1];
        X2 += point[0] * point[0];
        Y2 += point[1] * point[1];
    }
    X  /= cloud.size();
    Y  /= cloud.size();
    XY /= cloud.size();
    X2 /= cloud.size();
    Y2 /= cloud.size();
    A = -(XY - X * Y); //!< Common for both solutions
    scalar_type Bx = X2 - X * X;
    scalar_type By = Y2 - Y * Y;
    if (std::fabs(Bx) < std::fabs(By)) //!< Test verticality/horizontality
    {   // Line is more vertical.
        B = By;
        std::swap(A, B);
    }
    else
    {   // Line is more horizontal.
        // Classical solution, when we expect a more horizontal-like line
        B = Bx;
    }
    C = -(A * X + B * Y);
    // Optional normalization:
    // scalar_type D = std::sqrt(A*A + B*B);
    // A /= D;
    // B /= D;
    // C /= D;
    return true;
}
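A quick usage sketch (my own, with made-up sample points) to show how the A, B, C outputs relate back to slope and intercept:

#include <iostream>

int main() {
    cloud_type cloud = { {0.0, 1.0}, {1.0, 3.0}, {2.0, 5.0} }; // points on y = 2x + 1
    scalar_type A, B, C;
    if (fit(A, B, C, cloud)) {
        std::cout << "A=" << A << " B=" << B << " C=" << C << '\n';
        if (B != 0) // slope = -A/B, intercept = -C/B when the line is not vertical
            std::cout << "slope=" << -A / B << " intercept=" << -C / B << '\n';
    }
}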

You can also use or study this implementation; there is also documentation for it here.

Fitting a line can be accomplished in different ways.
Least squares means minimizing the sum of the squared distances. But you could take another cost function, for example the (non-squared) distance. Normally, though, you use the squared distance (least squares).
There is also the possibility to define the distance in different ways. Normally you just use the "y"-axis for the distance. But you could also use the total/orthogonal distance, where the distance is measured in both the x- and y-directions. This can be a better fit if you also have errors in the x direction (let it be the time of measurement) and you didn't start the measurement at the exact time saved in the data. Closed-form algorithms exist for both the least squares and the total least squares line fit. So if you fit with one of those, you get the line with the minimal sum of squared distances to the data points. You can't fit a better line in the sense of your definition; you could only change the definition, for example by taking another cost function or by defining the distance in another way.
There is a lot of material about fitting models to data, but normally it all uses the least-squares line fit, and you should be fine most of the time. If you have a special case, though, it can be necessary to think about what you're doing. A least-squares fit is done in maybe a few minutes; thinking about which method best fits your problem involves understanding the math, which can take an indefinite amount of time :-). A sketch of the total least squares case follows below.
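Since the closed form for total (orthogonal) least squares is shown less often, here is a sketch of it under the standard formulation: the best-fit line passes through the centroid, and its direction angle theta satisfies tan(2*theta) = 2*Sxy / (Sxx - Syy), where Sxx, Syy, Sxy are the centered second moments. The names TlsLine and fitTls are illustrative.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

struct TlsLine {
    double px, py; // a point on the line (the centroid of the data)
    double dx, dy; // unit direction vector of the line
};

bool fitTls(const std::vector<std::pair<double, double>> &pts, TlsLine &line) {
    const std::size_t n = pts.size();
    if (n < 2) return false;
    double mx = 0, my = 0;
    for (const auto &p : pts) { mx += p.first; my += p.second; }
    mx /= n;
    my /= n;
    double sxx = 0, syy = 0, sxy = 0; // centered second moments
    for (const auto &p : pts) {
        sxx += (p.first - mx) * (p.first - mx);
        syy += (p.second - my) * (p.second - my);
        sxy += (p.first - mx) * (p.second - my);
    }
    // Angle of the principal axis; vertical lines need no special case here.
    const double theta = 0.5 * std::atan2(2.0 * sxy, sxx - syy);
    line = { mx, my, std::cos(theta), std::sin(theta) };
    return true;
}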

Note: this is NOT AN ANSWER TO THIS QUESTION but to "Line closest to a set of points", which has been flagged as a duplicate of this one (incorrectly, in my opinion); there is no way to add new answers there.
The question asks for:
Find the line whose distance from all the points is minimum ? By
distance I mean the shortest distance between the point and the line.
The most usual interpretation of distance "between the point and the line" is the euclidean distance and the most common interpretation of "from all points" is the sum of distances (in absolute or squared value).
When the target is to minimize the sum of squared euclidean distances, linear regression (ordinary least squares) is not the algorithm to use. In addition, linear regression cannot produce a vertical line. The algorithm to be used is "total least squares". See, for example, Wikipedia for the problem description and this answer on Math Stack Exchange for details about the formulation.

To fit a line y = param[0]*x + param[1], simply do this:
// Assumes x and y hold the coordinates of ninliers points,
// and param[2] receives the slope and intercept.
double sum_x = 0, sum_y = 0, sum_xy = 0, sum_x2 = 0;
for (int i = 0; i < ninliers; i++)
{
    sum_x  += x[i];
    sum_y  += y[i];
    sum_xy += x[i] * y[i];
    sum_x2 += x[i] * x[i];
}
// means
double mean_x = sum_x / ninliers;
double mean_y = sum_y / ninliers;
double varx = sum_x2 - sum_x * mean_x; // variance of x (scaled by n)
double cov  = sum_xy - sum_x * mean_y; // covariance of x and y (scaled by n)
// check for zero varx before dividing
param[0] = cov / varx;
param[1] = mean_y - param[0] * mean_x;
More on the topic: http://easycalculation.com/statistics/learn-regression.php
(The formulas are the same; they are just multiplied and divided by N, the sample size.) If you want to fit a plane to 3D data, use a similar approach:
http://www.mymathforum.com/viewtopic.php?f=13&t=8793
Disclaimer: all quadratic fits are linear and optimal in the sense that they reduce the noise in the parameters. However, you might be interested in reducing the noise in the data instead. You might also want to ignore outliers, since they can bias your solution greatly. Both problems can be solved with RANSAC. See my post at:
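The linked post is not preserved here, but as a rough illustration of the RANSAC idea for lines, here is a minimal sketch (all names, the threshold, and the iteration count are illustrative): repeatedly pick two random points, hypothesize the line through them, count inliers within a distance tolerance, and keep the hypothesis with the most inliers.

#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Count points within tol of the line through p and q (A*x + B*y + C = 0 form).
int countInliers(const std::vector<Pt> &pts, const Pt &p, const Pt &q, double tol) {
    const double A = q.y - p.y, B = p.x - q.x, C = -(A * p.x + B * p.y);
    const double norm = std::sqrt(A * A + B * B);
    if (norm == 0.0) return 0; // degenerate sample: p == q
    int inliers = 0;
    for (const auto &r : pts)
        if (std::fabs(A * r.x + B * r.y + C) / norm <= tol) ++inliers;
    return inliers;
}

void ransacLine(const std::vector<Pt> &pts, double tol, int iters, Pt &bestP, Pt &bestQ) {
    int best = -1;
    for (int k = 0; k < iters; ++k) {
        const Pt &p = pts[std::rand() % pts.size()];
        const Pt &q = pts[std::rand() % pts.size()];
        const int in = countInliers(pts, p, q, tol);
        if (in > best) { best = in; bestP = p; bestQ = q; }
    }
    // A final least-squares fit restricted to the winning inliers would refine the result.
}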

Related

Kernel Density Estimator (with Gauss Kernel): Sum f(x) = 1?

I want to use KDE with the Gaussian kernel. If I'm correct, the sum of all f(x) must be 1 (up to rounding)?
My implementation looks like this:
#include <cmath>
#include <utility>
#include <vector>

float K(float const &val)
{
    const float p = 1.0 / std::sqrt(2.0 * M_PI);
    float result = 0.5 * (val * val);
    result = p * std::exp(-result);
    return result;
}

std::vector<std::pair<float, float>> kde(float *val, int len, float h)
{
    std::vector<std::pair<float, float>> density(len);
    const float p = 1.0 / (h * len);
    for (int r = 0; r < len; r++)
    {
        float sum = 0;
        for (int i = 0; i < len; i++)
            sum += K((val[r] - val[i]) / h);
        density[r] = std::make_pair(val[r], p * sum);
    }
    return density;
}
And I chose h > 0. Am I right that p*sum is the probability for the value val[r]? The sum over all probabilities is > 1 (but it looks OK to me).
You misinterpreted the assumptions on the probability density here. The density integrates to one, whereas its values at certain points are definitely not 1.
Let's discuss it using the following formula from the linked Wikipedia article, which you seem to use:
f_h(x) = (1 / (n*h)) * sum over i = 1..n of K((x - x_i) / h)
This formula provides the density f_h(x) evaluated at point x.
From my review, your code correctly evaluates this quantity. Yet, you misinterpreted the quantity which should be one. As a density, the integral over the complete space should yield one, i.e.
integral of f_h(x) dx over the whole real line = 1
This property is called normalization of the density.
Moreover, being a density itself, each summand (1/(n*h)) * K((x - x_i)/h) of f_h(x) should yield 1/n when integrated over the whole space, once the normalization constant is included. Again, there is no guarantee on the pointwise values of the summands.
In one dimension, you can easily confirm the normalization by using the trapezoidal rule or another quadrature scheme (if you provide a working example, I can try to do that). A sketch of such a check follows.
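As a sketch of that check (reusing the K() kernel from the question; the vector-based signature and grid bounds are my own choices): evaluate the estimate on a regular grid that generously covers the data and sum the trapezoids. The result should come out close to 1.

#include <cmath>
#include <vector>

float kdeIntegral(const std::vector<float> &data, float h,
                  float lo, float hi, int steps) {
    const float dx = (hi - lo) / steps;
    float integral = 0.0f, prev = 0.0f;
    for (int s = 0; s <= steps; ++s) {
        const float x = lo + s * dx;
        float sum = 0.0f;
        for (float xi : data) sum += K((x - xi) / h); // kernel sum at grid point x
        const float fx = sum / (h * data.size());     // density estimate f_h(x)
        if (s > 0) integral += 0.5f * (prev + fx) * dx; // trapezoid between grid points
        prev = fx;
    }
    return integral; // expect ~1 when [lo, hi] extends well beyond the data
}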

C++ floating point comparison

Suppose you have a rectangle with bottom-left point (0,0) and upper-right point (100,100).
Now two lines intersect the rectangle. I have to find the coordinates of the intersection point, which I have done. The problem is that I can't tell whether it is inside the rectangle or not. I used double comparison, but I think it is giving me a wrong answer. Suppose the intersection point is (x, y). I used this check for the comparison: if( x >= 0.0 && x <= 100.0 && y >= 0.0 && y <= 100.0 ). What should I do?
// A line represented as A*x + B*y = C (definition inferred from genline below)
struct line {
    int A, B, C;
};

const double L = 100.0, W = 100.0; // rectangle dimensions from the problem

// this function generates a line through two points
line genline(int x1, int y1, int x2, int y2) {
    line l;
    l.A = y2 - y1;
    l.B = x1 - x2;
    l.C = l.A * x1 + l.B * y1;
    return l;
}

// this function checks whether two lines intersect inside the rectangle
bool intersect(line m, line n) {
    int det = m.A * n.B - m.B * n.A;
    if (det == 0) {
        return false; // lines are parallel
    }
    else {
        double x = (n.B * m.C - m.B * n.C) / (det * 1.0);
        double y = (m.A * n.C - n.A * m.C) / (det * 1.0);
        if (x >= 0.0 && x <= L && y >= 0.0 && y <= W) { return true; }
        else { return false; }
    }
}
EDIT:
Both lines are stretched to infinity.
Your math looks like it's right. By the way, if a line intersects something, it is always inside that something.
Checking to see if a point is inside a rectangle is relatively easy. However, the challenge is to find the intersection between two line segments. There are a large number of corner cases to that problem, and the limited accuracy of floating point numbers plays a huge role here.
Your algorithm seems to be overly simplistic. For a deeper discussion of this topic you can look at this and this. This two-part article investigates the problem of finding the intersection of two lines using floating point numbers. Notice that they are about MATLAB, not C++, though that does not change the problem, and the algorithms are easily translatable to any language.
Depending on the application, even with clever tricks floating point representation might simply not cut it for some geometry problems. CGAL is a C++ library dedicated to computational geometry that deals with these kinds of problems. When necessary, it uses arbitrary precision arithmetic to handle degenerate cases.
When you're dealing with floating point (or double) values, testing for equality is naïve and will fail in edge cases. Every comparison you make should be in reference to "epsilon", an extremely small quantity that doesn't matter. If two numbers are within epsilon of each other, they are considered equal.
For example, instead of if (a == b), you need:
#include <cmath>

bool isEqual(double a, double b, double epsilon = 1.E-10)
{
    return fabs(a - b) <= epsilon;
}
Pick a suitable value for epsilon depending on your problem domain.
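Applied to the rectangle test from the question, the same idea means treating points within epsilon of the border as inside, so round-off at the edges does not flip the answer (a sketch, using the 100x100 rectangle from the problem):

bool insideRect(double x, double y, double epsilon = 1.E-10)
{
    // Accept points that fall within epsilon outside the exact bounds.
    return x >= 0.0 - epsilon && x <= 100.0 + epsilon
        && y >= 0.0 - epsilon && y <= 100.0 + epsilon;
}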

sin and cos are slow, is there an alternative?

My game needs to move objects by a certain angle. To do this I get the direction vector of the angle via sin and cos. Unfortunately, sin and cos are my bottleneck. I'm sure I do not need this much precision. Is there an alternative to C's sin & cos and to a look-up table that is decently precise but very fast?
I had found this:
float Skeleton::fastSin( float x )
{
    const float B = 4.0f/pi;
    const float C = -4.0f/(pi*pi);
    float y = B * x + C * x * abs(x);
    const float P = 0.225f;
    return P * (y * abs(y) - y) + y;
}
Unfortunately, this does not seem to work. I get significantly different behavior when I use this sin rather than C sin.
Thanks
A lookup table is the standard solution. You could also use two lookup tables, one for whole degrees and one for tenths of a degree, and utilize sin(A + B) = sin(A)cos(B) + cos(A)sin(B), as in the sketch below.
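A sketch of that two-table idea (the table sizes and the tenth-of-a-degree granularity are illustrative choices, and the angle is assumed already reduced to one period):

#include <cmath>

float sinDeg[360], cosDeg[360]; // whole degrees 0..359
float sinTen[10], cosTen[10];   // tenths of a degree 0.0..0.9

void buildTables() {
    const double d2r = M_PI / 180.0;
    for (int i = 0; i < 360; ++i) {
        sinDeg[i] = std::sin(i * d2r);
        cosDeg[i] = std::cos(i * d2r);
    }
    for (int i = 0; i < 10; ++i) {
        sinTen[i] = std::sin(i * 0.1 * d2r);
        cosTen[i] = std::cos(i * 0.1 * d2r);
    }
}

// x in tenths of a degree, already reduced to [0, 3600)
float fastSinTenths(int x) {
    const int a = x / 10, b = x % 10;
    return sinDeg[a] * cosTen[b] + cosDeg[a] * sinTen[b]; // sin(A + B)
}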
For your fastSin(), you should check its documentation to see what range it's valid on. The units you're using for your game could be too big or too small and scaling them to fit within that function's expected range could make it work better.
EDIT:
Someone else mentioned getting it into the desired range by subtracting PI, but apparently there's a function called fmod for doing modulus division on floats/doubles, so this should do it:
#include <iostream>
#include <cmath>

float fastSin( float x ){
    x = fmod(x + M_PI, M_PI * 2) - M_PI; // restrict x so that -M_PI < x < M_PI
    const float B = 4.0f / M_PI;
    const float C = -4.0f / (M_PI * M_PI);
    float y = B * x + C * x * std::abs(x);
    const float P = 0.225f;
    return P * (y * std::abs(y) - y) + y;
}

int main() {
    std::cout << fastSin(100.0) << '\n' << std::sin(100.0) << std::endl;
}
I have no idea how expensive fmod is though, so I'm going to try a quick benchmark next.
Benchmark Results
I compiled this with -O2 and ran the result with the Unix time program:
#include <cmath>
#include <iostream>

const int REPETITIONS = 100000000; // an arbitrary count, large enough to time

int main() {
    float a = 0;
    for (int i = 0; i < REPETITIONS; i++) {
        a += sin(i); // or fastSin(i);
    }
    std::cout << a << std::endl;
}
The result is that sin is about 1.8x slower (if fastSin takes 5 seconds, sin takes 9). The accuracy also seemed to be pretty good.
If you chose to go this route, make sure to compile with optimization on (-O2 in gcc).
I know this is already an old topic, but for people who have the same question, here is a tip.
A lot of the time in 2D and 3D rotation, all vectors are rotated by a fixed angle. Instead of calling cos() or sin() in every cycle of the loop, create a variable before the loop which already contains the value of cos(angle) or sin(angle), and use that variable in the loop. This way the function only has to be called once, as in the sketch below.
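A minimal sketch of that hoisting (the rotation loop itself is illustrative):

#include <cmath>

void rotateAll(float *xs, float *ys, int n, float angle) {
    const float c = std::cos(angle); // computed once, not once per point
    const float s = std::sin(angle);
    for (int i = 0; i < n; ++i) {
        const float x = xs[i], y = ys[i];
        xs[i] = x * c - y * s; // standard 2D rotation
        ys[i] = x * s + y * c;
    }
}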
If you rephrase the return in fastSin as
return (1-P) * y + P * (y * abs(y))
And rewrite y as (for x>0 )
y = 4 * x * (pi-x) / (pi * pi)
you can see that y is a parabolic first-order approximation to sin(x) chosen so that it passes through (0,0), (pi/2,1) and (pi,0), and is symmetrical about x=pi/2.
Thus we can only expect our function to be a good approximation from 0 to pi. If we want values outside that range we can use the 2-pi periodicity of sin(x) and that sin(x+pi) = -sin(x).
The y*abs(y) is a "correction term" which also passes through those three points. (I'm not sure why y*abs(y) is used rather than just y*y since y is positive in the 0-pi range).
This form of overall approximation function guarantees that a linear blend of the two functions y and y*y, (1-P)*y + P * y*y will also pass through (0,0), (pi/2,1) and (pi,0).
We might expect y to be a decent approximation to sin(x), but the hope is that by picking a good value for P we get a better approximation.
One question is "How was P chosen?". Personally, I'd chose the P that produced the least RMS error over the 0,pi/2 interval. (I'm not sure that's how this P was chosen though)
That error comes from the integral E(P) = integral over [0, pi/2] of (sin(x) - ((1-P)*y + P*y*abs(y)))^2 dx. Minimizing this with respect to P means setting dE/dP = 0, which can be rearranged and solved for P.
Wolfram Alpha evaluates the initial integral to be the quadratic
E = (16 π^5 p^2 - (96 π^5 + 100800 π^2 - 967680)p + 651 π^5 - 20160 π^2)/(1260 π^4)
which has a minimum of
min(E) = -11612160/π^9 + 2419200/π^7 - 126000/π^5 - 2304/π^4 + 224/π^2 + (169 π)/420
≈ 5.582129689596371e-07
at
p = 3 + 30240/π^5 - 3150/π^3
≈ 0.2248391013559825
Which is pretty close to the specified P=0.225.
You can raise the accuracy of the approximation by adding an additional correction term, giving a form something like return (1-a-b)*y + a*y*abs(y) + b*y*y*abs(y). I would find a and b the same way as above, this time giving a system of two linear equations in a and b to solve rather than a single equation in p. I'm not going to do the derivation, as it is tedious and the conversion to LaTeX images is painful... ;)
NOTE: When answering another question I thought of another valid choice for P.
The problem is that using reflection to extend the curve into (-pi,0) leaves a kink in the curve at x=0. However, I suspect we can choose P such that the kink becomes smooth.
To do this take the left and right derivatives at x=0 and ensure they are equal. This gives an equation for P.
You can compute a table S of 256 values, from sin(0) to sin(2*pi). Then, to pick sin(x), bring x back into [0, 2*pi] and pick the two values S[a], S[b] from the table such that a < x < b. From there, linear interpolation gives you a fair approximation; a sketch follows below.
Memory saving trick: you actually only need to store [0, pi/2] and use the symmetries of sin(x).
Enhancement trick: linear interpolation can be a problem because of its non-smooth derivative; the human eye is good at spotting such glitches in animation and graphics. Use cubic interpolation in that case.
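A sketch of the 256-entry table with linear interpolation (sizes and scaling as suggested above; wrap-around is handled with a mask since 256 is a power of two):

#include <cmath>

float S[256];

void buildSineTable() {
    for (int i = 0; i < 256; ++i)
        S[i] = std::sin(i * 2.0 * M_PI / 256.0);
}

float tableSin(float x) {
    // Bring x into [0, 2*pi), then scale to table coordinates.
    x = std::fmod(x, 2.0f * (float)M_PI);
    if (x < 0) x += 2.0f * (float)M_PI;
    const float t = x * (256.0f / (2.0f * (float)M_PI));
    const int ai = (int)t;
    const int a = ai & 255;      // lower table index
    const int b = (a + 1) & 255; // next index, wrapping at the table end
    const float frac = t - ai;
    return S[a] + (S[b] - S[a]) * frac; // linear interpolation
}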
What about
x*(0.0174532925199433-8.650935142277599*10^-7*x^2)
for degrees and
x*(1-0.162716259904269*x^2)
for radians, on [-45, 45] and [-pi/4, pi/4] respectively?
This (i.e. the fastSin function) approximates the sine function using a parabola. I suspect it's only good for values between -π and +π. Fortunately, you can keep adding or subtracting 2π until you get into this range.
You can use this approximation. This solution uses a quadratic curve:
http://www.starming.com/index.php?action=plugin&v=wave&ajax=iframe&iframe=fullviewonepost&mid=56&tid=4825

OpenCV detect if points lie along line/plane

I am working on a form of autocalibration for an optics device which is currently performed manually. The first part of the calibration is to determine whether a light beam has illuminated the set of 'calibration' points.
I am using OpenCV and have thresholded and cropped the image to leave only the possibly relevant points. I now want to determine if these points lie along a straight (horizontal) line; if a sufficient number do, the beam is in the correct position! (The points lie in a straight line, but the beam is often bent, so hitting most of the points suffices; there are 21 points which show up as white circles when thresholded.)
I have tried using a histogram, but on the thresholded image the results are not correct, and I am now looking at Hough lines; however, this detects straight lines from edges, whereas I want to establish whether detected points lie on a line.
This is the threshold code I use:
cvThreshold(output, output, 150, 256, CV_THRESH_BINARY);
The histogram results, with anywhere from 1 to 640 bins (the image width), are two lines at 0 and about two-thirds of the way through with near-maximum value. Not the distribution expected or obtained without thresholding.
Some pictures to try to illustrate the point (note the 'noisy' light spots, which are a feature of the system setup and cannot be overcome):
12 points in a straight line next to one another (beam in correct position)
The sort of output wanted (for illustration; if the points are on the line, this is all I need to know!)
Any help would be greatly appreciated. One thought was to extract the coordinates of the points and compare them, but I don't know how to do this.
In case it helps anyone, here is a very basic first draft of some simple linear regression code I used.
// Assumes x and y are arrays of n coordinates, and that
// slope and intercept are doubles declared by the caller.
// Calculate the averages of arrays x and y
double xa = 0, ya = 0;
for (int i = 0; i < n; i++)
{
    xa += x[i];
    ya += y[i];
}
xa /= n;
ya /= n;
// Summation of all X and Y values
double sumX = 0;
double sumY = 0;
// Summation of all X*Y values
double sumXY = 0;
// Summation of all X^2 and Y^2 values
double sumXs = 0;
double sumYs = 0;
for (int i = 0; i < n; i++)
{
    sumX  = sumX + x[i];
    sumY  = sumY + y[i];
    sumXY = sumXY + (x[i] * y[i]);
    sumXs = sumXs + (x[i] * x[i]);
    sumYs = sumYs + (y[i] * y[i]);
}
// (Sum of X) squared and (Sum of Y) squared
double Xs = sumX * sumX;
double Ys = sumY * sumY;
// Calculate slope, m
slope = (n * sumXY - sumX * sumY) / (n * sumXs - Xs);
// Calculate intercept
intercept = ceil((sumY - slope * sumX) / n);
// Calculate correlation coefficient, r (square it for r^2)
double r_top = (n * sumXY - sumX * sumY);
double r_bottom = sqrt((n * sumXs - Xs) * (n * sumYs - Ys));
double r = 0;
// Guard against division by zero (perfectly vertical or horizontal data)
if (r_top == 0 || r_bottom == 0)
    r = 0;
else
    r = r_top / r_bottom;
There are more efficient ways of doing this (see CodeCogs or ALGLIB), but as a quick fix this code seems to work.
To detect circles in OpenCV I dropped the Hough Transform and adapted code from this post:
Detection of coins (and fit ellipses) on an image
It is then a case of refining the coordinates (removing any outliers etc.) to determine if the circles lie on a horizontal line, using the slope and intercept values of the regression.
Obtain the x,y coordinates of the thresholded points, then perform a linear regression to find the best-fit line. With that line, you can determine the r^2 value, which effectively gives you the quality of the fit. Based on that fitness measure, you can determine your calibration success.
Here is a good discussion.
You could do something like this, although it is an approximation (a C++/OpenCV sketch of the same idea follows below):
var dw = decide on a typical dot width in pixels
maxdots = 0;
for each line of the image {
    var dots = 0;
    scan by incrementing x by dw {
        if (color == dotcolor) dots++;
    }
    if (dots > maxdots) maxdots = dots;
}
maxdots would be the best result...
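A rough C++/OpenCV translation of that scan, as a sketch (the binarized 8-bit input and the function name are my assumptions, not the original poster's):

#include <opencv2/opencv.hpp>

// Returns the row with the most white samples when stepping across with stride dw.
int bestRowOfDots(const cv::Mat &binary, int dw) {
    int maxDots = 0, bestRow = -1;
    for (int y = 0; y < binary.rows; ++y) {
        int dots = 0;
        for (int x = 0; x < binary.cols; x += dw)
            if (binary.at<uchar>(y, x) > 0) ++dots; // white sample = dot hit
        if (dots > maxDots) { maxDots = dots; bestRow = y; }
    }
    return bestRow; // the horizontal line most likely covered by the dots
}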

Create sine lookup table in C++

How can I rewrite the following pseudocode in C++?
real array sine_table[-1000..1000]
for x from -1000 to 1000
    sine_table[x] := sine(pi * x / 1000)
I need to create a sine_table lookup table.
You can reduce the size of your table to 25% of the original by only storing values for the first quadrant, i.e. for x in [0,pi/2].
To do that your lookup routine just needs to map all values of x to the first quadrant using simple trig identities:
sin(x) = - sin(-x), to map from quadrant IV to I
sin(x) = sin(pi - x), to map from quadrant II to I
To map from quadrant III to I, apply both identities, i.e. sin(x) = - sin (pi + x)
Whether this strategy helps depends on how much memory usage matters in your case. But it seems wasteful to store four times as many values as you need just to avoid a comparison and subtraction or two during lookup.
I second Jeremy's recommendation to measure whether building a table is better than just using std::sin(). Even with the original large table, you'll have to spend cycles during each table lookup to convert the argument to the closest increment of pi/1000, and you'll lose some accuracy in the process.
If you're really trying to trade accuracy for speed, you might try approximating the sin() function using just the first few terms of the Taylor series expansion.
sin(x) = x - x^3/3! + x^5/5! ..., where ^ represents raising to a power and ! represents the factorial.
Of course, for efficiency, you should precompute the factorials and make use of the lower powers of x to compute higher ones, e.g. use x^3 when computing x^5.
One final point: the truncated Taylor series above is more accurate for values closer to zero, so it's still worthwhile to map to the first or fourth quadrant before computing the approximate sine. A sketch in Horner form follows.
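Here is a sketch of such a truncated series, with the factorial coefficients precomputed and the powers of x reused by evaluating in Horner form (the cutoff at the x^7 term is my choice):

#include <cmath>

// x - x^3/3! + x^5/5! - x^7/7!; only accurate for x already reduced near zero.
double taylorSin(double x) {
    const double x2 = x * x;
    return x * (1.0 + x2 * (-1.0 / 6.0 + x2 * (1.0 / 120.0 - x2 / 5040.0)));
}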
Addendum:
Yet one more potential improvement based on two observations:
1. You can compute any trig function if you can compute both the sine and cosine in the first octant [0,pi/4]
2. The Taylor series expansion centered at zero is more accurate near zero
So if you decide to use a truncated Taylor series, then you can improve accuracy (or use fewer terms for similar accuracy) by mapping to either the sine or cosine to get the angle in the range [0,pi/4] using identities like sin(x) = cos(pi/2-x) and cos(x) = sin(pi/2-x) in addition to the ones above (for example, if x > pi/4 once you've mapped to the first quadrant.)
Or if you decide to use a table lookup for both the sine and cosine, you could get by with two smaller tables that only covered the range [0,pi/4] at the expense of another possible comparison and subtraction on lookup to map to the smaller range. Then you could either use less memory for the tables, or use the same memory but provide finer granularity and accuracy.
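As a concrete illustration of the quadrant-reduction idea from earlier in this answer, here is a sketch of a quarter-period table lookup (the table size N and the nearest-entry rounding are illustrative choices):

#include <cmath>

const int N = 250; // table resolution over [0, pi/2]
double quarterTable[N + 1];

void buildQuarterTable() {
    for (int i = 0; i <= N; ++i)
        quarterTable[i] = std::sin(M_PI / 2.0 * i / N);
}

double lookupSin(double x) {
    x = std::fmod(x, 2.0 * M_PI); // reduce to one period
    if (x < 0) x += 2.0 * M_PI;
    if (x > M_PI) return -lookupSin(x - M_PI); // sin(x + pi) = -sin(x)
    if (x > M_PI / 2.0) x = M_PI - x;          // sin(pi - x) = sin(x)
    const int i = (int)(x / (M_PI / 2.0) * N + 0.5); // nearest table entry
    return quarterTable[i];
}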
#include <cmath>

const double PI = 3.14159265358979323846;
long double sine_table[2001];

// Wrapped in a function so the snippet compiles as-is.
void buildTable()
{
    for (int index = 0; index < 2001; index++)
    {
        sine_table[index] = std::sin(PI * (index - 1000) / 1000.0);
    }
}
One more point: calling trigonometric functions is pricey. If you want to prepare the lookup table for sine with a constant step, you can save calculation time, at the expense of some potential precision loss.
Consider your minimal step to be "a". That is, you need sin(a), sin(2a), sin(3a), ...
Then you can do the following trick: first calculate sin(a) and cos(a). Then for every consecutive step use the following trigonometric identities (see the sketch below):
sin([n+1] * a) = sin(n*a) * cos(a) + cos(n*a) * sin(a)
cos([n+1] * a) = cos(n*a) * cos(a) - sin(n*a) * sin(a)
The drawback of this method is that the round-off error accumulates during the procedure.
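A sketch of filling a table with that recurrence (the function name and signature are illustrative): only one sin and one cos call for the step, after which each entry is derived from the previous one.

#include <cmath>

void fillSineTable(double *table, int n, double step) {
    const double sa = std::sin(step), ca = std::cos(step);
    double s = 0.0, c = 1.0; // sin(0), cos(0)
    for (int i = 0; i < n; ++i) {
        table[i] = s;
        const double s2 = s * ca + c * sa; // sin((i+1)*step)
        const double c2 = c * ca - s * sa; // cos((i+1)*step)
        s = s2;
        c = c2;
    }
}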
#include <cmath>

const double PI = 3.14159265358979323846;
double sine_table[1000] = {0};

void buildTable()
{
    for (int i = 1; i <= 1000; i++)
    {
        sine_table[i - 1] = std::sin(PI * i / 1000.0);
    }
}

double getSineValue(int multipleOfPi) {
    if (multipleOfPi == 0) return 0.0;
    int sign = 1;
    if (multipleOfPi < 0) {
        sign = -1;
    }
    return sign * sine_table[sign * multipleOfPi - 1];
}
You can reduce the array length to 500 by the trick sin(pi/2 ± angle) = cos(angle).
So store sin and cos from 0 to pi/4.
I don't remember it off the top of my head, but it increased the speed of my program.
You'll want the std::sin() function from <cmath>.
Another approximation, from a book or something (this appears to be SynthMaker script, per the forum linked below):
streamin ramp;
streamout sine;
float x,rect,k,i,j;
x = ramp -0.5;
rect = x * (1 - x < 0 & 2);
k = (rect + 0.42493299) *(rect -0.5) * (rect - 0.92493302) ;
i = 0.436501 + (rect * (rect + 1.05802));
j = 1.21551 + (rect * (rect - 2.0580201));
sine = i*j*k*60.252201*x;
full discussion here:
http://synthmaker.co.uk/forum/viewtopic.php?f=4&t=6457&st=0&sk=t&sd=a
I presume you know that using a division is a lot slower than multiplying by a decimal number; /5 is always slower than *0.2.
It's just an approximation.
also:
streamin ramp;
streamin x; // 1.5 = Saw 3.142 = Sin 4.5 = SawSin
streamout sine;
float saw,saw2;
saw = (ramp * 2 - 1) * x;
saw2 = saw * saw;
sine = -0.166667 + saw2 * (0.00833333 + saw2 * (-0.000198409 + saw2 * (2.7526e-006+saw2 * -2.39e-008)));
sine = saw * (1+ saw2 * sine);