Over the past few weeks I have been wondering why people try to re-invent the wheel and spend hours writing their own sqrt function, for example. The built-in version is well optimized, precise, and stable enough.
I am speaking about the Carmack-style square root, for example. What is the point? It loses precision during the approximation and it relies on casting (type punning).
The Intel-style SSE square root gave precise results, but in my measurements it was slower than the standard sqrt.
On average, the standard sqrt beat all of the above tricks by a wide margin. So my question is: what is the point?
My PC has the below CPU:
Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz.
I got the following results for each method (I have fixed the performance test according to the suggestion in the helpful comment below; thanks for that, n.m.):
(Please keep in mind that if you are using an approximation like Newton's method, you will lose precision, so you must align your calculations accordingly.)
You can find the source code below for reference.
#include <chrono>
#include <cmath>
#include <deque>
#include <iomanip>
#include <iostream>
#include <immintrin.h>
#include <random>
using f64 = double;
using s64 = int64_t;
using u64 = uint64_t;
static constexpr u64 cycles = 24;
static constexpr u64 sample_max = 1000000;
f64 sse_sqrt(const f64 x) {
// _mm_load_pd would read two doubles from a 16-byte-aligned address; for a
// single scalar, _mm_set_sd avoids the misaligned, out-of-bounds read.
__m128d root = _mm_sqrt_pd(_mm_set_sd(x));
return _mm_cvtsd_f64(root);
}
// Not constexpr: reading the inactive union member below is type punning and
// is not allowed in a constant expression.
f64 carmack_sqrt(const f64 x) {
union {
f64 x;
s64 i;
} u = {};
u.x = x;
u.i = 0x5fe6eb50c7b537a9 - (u.i >> 1);
f64 xhalf = 0.5 * x;
u.x = u.x * (1.5 - xhalf * u.x * u.x);
// u.x = u.x * (1.5 - xhalf * u.x * u.x);
// u.x = u.x * (1.5 - xhalf * u.x * u.x);
// ... and so on, if you want a more precise result ...
return u.x * x;
}
int main(int /* argc */, char ** /*argv*/) {
std::random_device r;
std::default_random_engine e(r());
std::uniform_real_distribution<f64> dist(1, sample_max);
std::deque<f64> samples(sample_max);
for (auto& sample : samples) {
sample = dist(e);
}
// std sqrt
{
std::cout << "> Measuring std sqrt.\r\n> Please wait . . .\r\n";
f64 result = 0;
auto t1 = std::chrono::high_resolution_clock::now();
for (u64 cycle = 0; cycle < cycles; ++cycle) {
for (auto& sample : samples) {
result += std::sqrt(static_cast<f64>(sample));
}
}
auto t2 = std::chrono::high_resolution_clock::now();
auto dt = t2 - t1;
std::cout << "> Accumulated result: " << std::setprecision(19) << result << "\n";
std::cout << "> Total execution time: " <<
std::chrono::duration_cast<std::chrono::milliseconds>(dt).count() << " ms.\r\n\r\n";
}
// sse sqrt
{
std::cout << "> Measuring sse sqrt.\r\n> Please wait . . .\r\n";
f64 result = 0;
auto t1 = std::chrono::high_resolution_clock::now();
for (u64 cycle = 0; cycle < cycles; ++cycle) {
for (auto& sample : samples) {
result += sse_sqrt(static_cast<f64>(sample));
}
}
auto t2 = std::chrono::high_resolution_clock::now();
auto dt = t2 - t1;
std::cout << "> Accumulated result: " << std::setprecision(19) << result << "\n";
std::cout << "> Total execution time: " <<
std::chrono::duration_cast<std::chrono::milliseconds>(dt).count() << " ms.\r\n\r\n";
}
// carmack sqrt
{
std::cout << "> Measuring carmack sqrt.\r\n> Please wait . . .\r\n";
f64 result = 0;
auto t1 = std::chrono::high_resolution_clock::now();
for (u64 cycle = 0; cycle < cycles; ++cycle) {
for (auto& sample : samples) {
result += carmack_sqrt(static_cast<f64>(sample));
}
}
auto t2 = std::chrono::high_resolution_clock::now();
auto dt = t2 - t1;
std::cout << "> Accumulated result: " << std::setprecision(19) << result << "\n";
std::cout << "> Total execution time: " <<
std::chrono::duration_cast<std::chrono::milliseconds>(dt).count() << " ms.\r\n\r\n";
}
std::cout << "> Press any key to exit . . .\r\n";
std::getchar();
return 0;
}
Please note that I am not here to criticize anybody; I am here to learn, experiment, and figure out my own approach and the best toolset to choose from.
I am writing my own game engine for my portfolio. I appreciate your kind answers and I am open to any suggestions.
Have a nice day.
That fast reciprocal square root trick is mostly obsolete. SSE's built-in approximate reciprocal square root, which has existed since the Pentium III, has completely replaced it on the PC platform. Other platforms usually have their own reciprocal square root; for example, ARM has VRSQRTE and a handy instruction that does the Newton step too.
By the way, turning the result into a non-reciprocal square root usually makes it less useful: the primary use case is normalizing a vector, where a "straight" square root is awkward (you would have to divide by it) while a reciprocal square root fits exactly (it becomes a multiply).
As is often the case, your benchmark isn't quite accurate. I happened to do some relevant tests a while ago, where the relevant parts look like this:
std::sqrt based:
HMM_INLINE float HMM_LengthVec4(hmm_vec4 A)
{
float Result = std::sqrt(HMM_LengthSquaredVec4(A));
return(Result);
}
HMM_INLINE hmm_vec4 HMM_NormalizeVec4(hmm_vec4 A)
{
hmm_vec4 Result = {0};
float VectorLength = HMM_LengthVec4(A);
/* NOTE(kiljacken): We need a zero check to not divide-by-zero */
if (VectorLength != 0.0f)
{
float Multiplier = 1.0f / VectorLength;
#ifdef HANDMADE_MATH__USE_SSE
__m128 SSEMultiplier = _mm_set1_ps(Multiplier);
Result.InternalElementsSSE = _mm_mul_ps(A.InternalElementsSSE, SSEMultiplier);
#else
Result.X = A.X * Multiplier;
Result.Y = A.Y * Multiplier;
Result.Z = A.Z * Multiplier;
Result.W = A.W * Multiplier;
#endif
}
return (Result);
}
SSE reciprocal square root plus Newton step:
HMM_INLINE hmm_vec4 HMM_NormalizeVec4_new(hmm_vec4 A)
{
hmm_vec4 Result;
// square elements and add them together, result is in every lane
__m128 t0 = _mm_mul_ps(A.InternalElementsSSE, A.InternalElementsSSE);
__m128 t1 = _mm_add_ps(t0, _mm_shuffle_ps(t0, t0, _MM_SHUFFLE(2, 3, 0, 1)));
__m128 sq = _mm_add_ps(t1, _mm_shuffle_ps(t1, t1, _MM_SHUFFLE(0, 1, 2, 3)));
// compute reciprocal square root with Newton step for ~22bit accuracy
__m128 rLen = _mm_rsqrt_ps(sq);
__m128 half = _mm_set1_ps(0.5);
__m128 threehalf = _mm_set1_ps(1.5);
__m128 t = _mm_mul_ps(_mm_mul_ps(sq, half), _mm_mul_ps(rLen, rLen));
rLen = _mm_mul_ps(rLen, _mm_sub_ps(threehalf, t));
// multiply elements by the reciprocal of the vector length
__m128 normed = _mm_mul_ps(A.InternalElementsSSE, rLen);
// normalize zero-vector to zero, not to NaN
__m128 zero = _mm_setzero_ps();
Result.InternalElementsSSE = _mm_andnot_ps(_mm_cmpeq_ps(A.InternalElementsSSE, zero), normed);
return (Result);
}
SSE reciprocal square root without Newton step:
HMM_INLINE hmm_vec4 HMM_NormalizeVec4_lowacc(hmm_vec4 A)
{
hmm_vec4 Result;
// square elements and add them together, result is in every lane
__m128 t0 = _mm_mul_ps(A.InternalElementsSSE, A.InternalElementsSSE);
__m128 t1 = _mm_add_ps(t0, _mm_shuffle_ps(t0, t0, _MM_SHUFFLE(2, 3, 0, 1)));
__m128 sq = _mm_add_ps(t1, _mm_shuffle_ps(t1, t1, _MM_SHUFFLE(0, 1, 2, 3)));
// compute reciprocal square root without Newton step for ~12bit accuracy
__m128 rLen = _mm_rsqrt_ps(sq);
// multiply elements by the reciprocal of the vector length
__m128 normed = _mm_mul_ps(A.InternalElementsSSE, rLen);
// normalize zero-vector to zero, not to NaN
__m128 zero = _mm_setzero_ps();
Result.InternalElementsSSE = _mm_andnot_ps(_mm_cmpeq_ps(A.InternalElementsSSE, zero), normed);
return (Result);
}
(quick-bench)
As you can see, I measured throughput and latency separately, and the distinction mattered a lot. Reciprocal square root with a Newton step takes a long time, about as long as using a normal square root, but can be processed at a higher throughput. Without the Newton step, a single vector-normalize operation also takes less time from start to finish, and the throughput becomes even better than before. Anyway, this should demonstrate that there is some point to doing something about your square roots.
By the way, the above code is not meant to be Good Practice; that would be normalizing 4 vectors simultaneously, so as to not waste a 4-wide SIMD operation on calculating a single (reciprocal) square root, as in the sketch below. That's not really the issue here though.
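For illustration, here is a minimal sketch of the batched approach, assuming an SoA layout in which four vectors are stored as four __m128 registers, one lane per vector (the Vec4x4 type and function name are placeholders of mine, not Handmade Math API):
#include <immintrin.h>
// Four vec4s in SoA form: x holds the four X components, y the four Ys, etc.
struct Vec4x4 { __m128 x, y, z, w; };
inline Vec4x4 NormalizeBatch(Vec4x4 v)
{
    // squared lengths of all four vectors at once
    __m128 len2 = _mm_add_ps(
        _mm_add_ps(_mm_mul_ps(v.x, v.x), _mm_mul_ps(v.y, v.y)),
        _mm_add_ps(_mm_mul_ps(v.z, v.z), _mm_mul_ps(v.w, v.w)));
    __m128 rlen = _mm_rsqrt_ps(len2); // four reciprocal square roots in one instruction
    v.x = _mm_mul_ps(v.x, rlen);
    v.y = _mm_mul_ps(v.y, rlen);
    v.z = _mm_mul_ps(v.z, rlen);
    v.w = _mm_mul_ps(v.w, rlen);
    return v;
}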
For fun and profit?
Based on your question there is no reason to do so, but if you want to learn a language it is worthwhile to solve mathematical problems, because they rely on integers/floats (which are primitives in almost any language) and the algorithms are well documented.
In "real" code, one should use the methods provided by libc, as long as you have one. Embedded platforms usually lack a libc or roll their own, so you have to implement your own.
The point of Carmack's technique was to extract better performance from integer operations than could be had from floating-point operations in the 1990s. Since that time, floating-point performance has massively improved, as you can see in your own benchmarks. There would be no practical reason to use that technique in new code unless you faced a similar constraint in your hardware, which the i7 doesn't.
Limitation of standard library usage due to scope of project
What is the point implementing custom math functions in C++ (like SQRT)?
In addition to what has already been mentioned in the other answers, the choice of a project implementing their own (custom) math functions could be due to:
Limitations placed on the project, e.g. due to some standard, to not make use of the math library shipped with the compiler.
One example could be an ASIL-classified project adhering to the ISO 26262 standard that uses a compiler qualified for correctly compiling the project's source code, but whose shipped standard library is not adequately qualified. The math library might then only be linkable as object code rather than built from source (for the latter, the project could write appropriate tests and source-code qualification themselves).
Related
I am not sure if this is more of a math question or a programming question. If it's math, please tell me.
I know there are a lot of free, ready-to-use FFT projects. But I am trying to understand the FFT method, just for fun and for study. So I implemented both algorithms, DFT and FFT, to compare them.
But I have a problem with my FFT. There does not seem to be a big difference in efficiency: my FFT is only a little faster than my DFT (in some cases it's twice as fast, but that is the maximum speedup).
Most articles about FFT mention bit reversal, but I don't see the reason to use bit reversal here. That is probably the issue, but I don't understand it. Please help me. What am I doing wrong?
This is my code (you can copy it into an online compiler and see how it works):
#include <complex>
#include <iostream>
#include <math.h>
#include <cmath>
#include <vector>
#include <chrono>
#include <ctime>
float _Pi = 3.14159265;
float sampleRate = 44100;
float resolution = 4;
float _SRrange = sampleRate / resolution; // I divide the sample rate to make the loop smaller,
//just to perform tests faster
float bufferSize = 512;
// Clock class is for measure time to execute whole loop:
class Clock
{
public:
Clock() { start = std::chrono::high_resolution_clock::now(); }
~Clock() {}
float secondsElapsed() // note: despite its name, this returns microseconds
{
auto stop = std::chrono::high_resolution_clock::now();
return std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count();
}
void reset() { start = std::chrono::high_resolution_clock::now(); }
private:
std::chrono::time_point<std::chrono::high_resolution_clock> start;
};
// Function to calculate magnitude of complex number:
float _mag_Hf(std::complex<float> sf);
// Function to calculate exp(-j*2*PI*n*k / sampleRate) - where "j" is imaginary number:
std::complex<float> _Wnk_Nc(float n, float k);
// Function to calculate exp(-j*2*PI*k / sampleRate):
std::complex<float> _Wk_Nc(float k);
int main() {
float scaleFFT = 512; // divide and conquer - if it's "1" then the whole algorithm is simply a DFT
// I wonder what the maximum of that value is. I always thought it should be equal to the
// buffer size (number of samples), but above some value it starts to work slower than the DFT
std::vector<float> inputSignal; // array of input signal
inputSignal.resize(bufferSize); // how many sample we will use to calculate Fourier Transform
std::vector<std::complex<float>> _Sf; // array to store Fourier Transform value for each measured frequency bin
_Sf.resize(scaleFFT); // resize it to size which we need.
std::vector<std::complex<float>> _Hf_Db_vect; //array to store magnitude (in logarithmic dB scale)
//for each measured frequency bin
_Hf_Db_vect.resize(_SRrange); //resize it to make it able to store value for each measured freq value
std::complex<float> _Sf_I_half; // complex to calculate first half of freq range
// from 1 to Nyquist (sampleRate/2)
std::complex<float> _Sf_II_half; // complex to calculate second half of freq range
//from Nyquist to sampleRate
for(int i=0; i<(int)_Sf.size(); i++)
inputSignal[i] = cosf((float)i/_Pi); // fill the input signal with some data; the content doesn't matter
Clock _time; // Start measure time
for(int freqBinK=0; freqBinK < _SRrange/2; freqBinK++) // calculate all frequencies (divided by 2 for the two halves)
{
for(int i=0; i<(int)_Sf.size(); i++) _Sf[i] = 0.0f; // clean all values, for next loop we need all values to be zero
for (int n=0; n<bufferSize/_Sf.size(); ++n) // Here I take all samples in buffer
{
std::complex<float> _W = _Wnk_Nc(_Sf.size()*(float)n, freqBinK);
for(int i=0; i<(int)_Sf.size(); i++) // Finally here is my divide and conquer
_Sf[i] += inputSignal[_Sf.size()*n +i] * _W; // And I see no reason to use any bit reversal; how should it be done?
}
std::complex<float> _Wk = _Wk_Nc(freqBinK);
_Sf_I_half = 0.0f;
_Sf_II_half = 0.0f;
for(int z=0; z<(int)_Sf.size()/2; z++) // here I calculate Fourier transform for each freq
{
_Sf_I_half += _Wk_Nc(2.0f * (float)z * freqBinK) * (_Sf[2*z] + _Wk * _Sf[2*z+1]); // First half - to Nyquist
_Sf_II_half += _Wk_Nc(2.0f * (float)z *freqBinK) * (_Sf[2*z] - _Wk * _Sf[2*z+1]); // Second half - to SampleRate
// again, I don't see the need for bit reversal; where should it be? :)
}
// Calculate magnitude in dB scale
_Hf_Db_vect[freqBinK] = _mag_Hf(_Sf_I_half); // First half
_Hf_Db_vect[freqBinK + _SRrange/2] = _mag_Hf(_Sf_II_half); // Second half
}
std::cout << _time.secondsElapsed() << std::endl; // time measured after executing the whole loop
}
float _mag_Hf(std::complex<float> sf)
{
float _Re_2;
float _Im_2;
_Re_2 = sf.real() * sf.real();
_Im_2 = sf.imag() * sf.imag();
return 20*log10(pow(_Re_2 + _Im_2, 0.5f)); //transform magnitude to logarithmic dB scale
}
std::complex<float> _Wnk_Nc(float n, float k)
{
std::complex<float> _Wnk_Ncomp;
_Wnk_Ncomp.real(cosf(-2.0f * _Pi * (float)n * k / sampleRate));
_Wnk_Ncomp.imag(sinf(-2.0f * _Pi * (float)n * k / sampleRate));
return _Wnk_Ncomp;
}
std::complex<float> _Wk_Nc(float k)
{
std::complex<float> _Wk_Ncomp;
_Wk_Ncomp.real(cosf(-2.0f * _Pi * k / sampleRate));
_Wk_Ncomp.imag(sinf(-2.0f * _Pi * k / sampleRate));
return _Wk_Ncomp;
}
One huge mistake you are making is calculating the butterfly weights (which involve sin and cos) on the fly, in _Wnk_Nc(). sin and cos typically cost tens to hundreds of clock cycles, whereas the other butterfly operations are just mul and add, which only take a few cycles, hence the need to factor these out. All fast FFT implementations do this as part of an initialisation step (usually called "plan creation" or similar). See e.g. FFTW and KissFFT.
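A minimal sketch of such a pre-computation step (the function name and layout are mine; a real plan stores more than this):
#include <complex>
#include <vector>
// Compute all twiddle factors exp(-2*pi*i*k/N) once; the butterfly loops then
// only do mul/add with table lookups instead of calling sin/cos.
std::vector<std::complex<float>> make_twiddles(int N)
{
    const float pi = 3.14159265358979f;
    std::vector<std::complex<float>> w(N / 2);
    for (int k = 0; k < N / 2; ++k)
        w[k] = std::polar(1.0f, -2.0f * pi * k / N);
    return w;
}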
Apart from the above-mentioned "pre-calculating butterfly weights" optimization, most FFT implementations also use SIMD instructions to vectorize the code.
// again, I don't see the need for bit reversal; where should it be?
The very first butterfly loop should be indexed in bit-reversed order. Those indexes are usually calculated inside the recursion, but for a loop-based solution calculating them is also costly, so it's better to pre-calculate them in the plan as well.
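A minimal sketch of such a pre-computed bit-reversal table, assuming N is a power of two (the function name is mine):
#include <vector>
// rev[i] is i with its log2(N) low bits reversed; use it to index the input
// of the first butterfly pass.
std::vector<unsigned> make_bitrev(unsigned N)
{
    unsigned bits = 0;
    while ((1u << bits) < N) ++bits;
    std::vector<unsigned> rev(N);
    for (unsigned i = 0; i < N; ++i) {
        unsigned r = 0;
        for (unsigned b = 0; b < bits; ++b)
            if (i & (1u << b)) r |= 1u << (bits - 1 - b);
        rev[i] = r;
    }
    return rev;
}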
Combining those optimization approaches results in approximately a 100x speedup.
Most fast FFT implementations either use a lookup table of precomputed twiddle factors, or a simple recursion to rotate the twiddle factors on the fly, instead of calling trigonometric math library functions inside the FFT inner loop.
For large FFTs, using a trig recursion formula is less likely to thrash the data caches on contemporary processors.
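A minimal sketch of the recursion for a single DFT bin (names are mine): the twiddle w is advanced by one complex multiply per sample instead of a sin/cos pair inside the loop:
#include <complex>
// Accumulate bin k of an N-point DFT of x, rotating the twiddle on the fly.
std::complex<float> dft_bin(const float* x, int N, int k)
{
    const float pi = 3.14159265358979f;
    const std::complex<float> step = std::polar(1.0f, -2.0f * pi * k / N);
    std::complex<float> w(1.0f, 0.0f), out(0.0f, 0.0f);
    for (int n = 0; n < N; ++n) {
        out += x[n] * w; // no trig call here
        w *= step;       // rotate by one bin step
    }
    return out;
}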
For signal processing I need to compute relatively large C arrays as shown in the code part below. This works fine so far; unfortunately, the implementation is slow. The size of "calibdata" is around 150k and it needs to be calculated for different frequencies/phases. Is there a way to improve the speed significantly? Doing the same with logical indexing in MATLAB is way faster.
What I tried already:
using a Taylor approximation of sine: no significant improvement.
using std::vector: also no significant improvement.
code:
double phase_func(double* calibdata, long size, double* freqscale, double fs, double phase, int currentcarrier){
double result = 0; // note: this declaration was missing from the original snippet
for (int i = 0; i < size; i++)
result += calibdata[i] * cos((2 * PI*freqscale[currentcarrier] * i / fs) + (phase*(PI / 180) - (PI / 2)));
result = fabs(result / size);
return result;
}
Best regards,
Thomas
When optimizing code for speed, step 1 is to enable compiler optimizations. I hope you've done that already.
Step 2 is to profile the code and see exactly how the time is being spent. Without profiling, you're just guessing, and you could end up trying to optimize the wrong thing.
For example, your guess seems to be that the cos function is the bottleneck. But the other possibility is that the calculation of the angle is the bottleneck. Here's how I would refactor the code to reduce the time spent calculating the angle.
double phase_func(double* calibdata, long size, double* freqscale, double fs, double phase, int currentcarrier)
{
double result = 0;
double angle = phase * (PI / 180) - (PI / 2);
double delta = 2 * PI * freqscale[currentcarrier] / fs;
for (int i = 0; i < size; i++)
{
result += calibdata[i] * cos( angle );
angle += delta;
}
return fabs(result / size);
}
Okay, I'm probably going to get flogged for this answer, but I would use the GPU for this. Because your array doesn't appear to be self-referential, the best speedup you're going to get for large arrays is through parallelization... by far. I don't use MATLAB, but I just did a quick search for GPU utilization on the MathWorks site:
http://www.mathworks.com/company/newsletters/articles/gpu-programming-in-matlab.html?requestedDomain=www.mathworks.com
Outside of MATLAB you could use OpenCL or CUDA yourself.
Your enemies in execution time are:
Division
Function calls (including implicit ones in loops)
Accessing data from different areas
Operating dissimilar instructions
You should research data-driven programming and effective use of the data cache.
Division
Whether with hardware support or software support, division takes a long time by its very nature. Eliminate it if possible, either by changing the numeric base or by factoring it out of the loop, as in the sketch below.
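For example, a loop-invariant division can be hoisted out and replaced by a multiplication inside the loop (a sketch with placeholder names):
// One division up front instead of one per iteration.
void scale_all(double* data, long n, double divisor)
{
    const double inv = 1.0 / divisor; // single division
    for (long i = 0; i < n; ++i)
        data[i] *= inv;               // multiply replaces divide
}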
Function Calls
The most efficient method of execution is sequential; processors are optimized for it. A branch may require the processor to perform some additional calculation (branch prediction) or to reload the instruction cache/pipeline, which is a waste of time that could be spent executing data instructions.
The optimization for this is to use techniques like loop unrolling and inlining of small functions. Also reduce the quantity of branches by simplifying expressions and using Boolean algebra.
Accessing data from different areas
Modern processors are optimized to operate on local data (data in one area). One example is loading an internal cache with data. Specifically, loading a cache line with data. For example, if the data from your arrays is in one location and the cosine data in another, this may cause the data cache to be reloaded, again wasting time.
A better solution is to place all data contiguously or to contiguously access all the data. Rather than making many discontiguous accesses to the cosine table, look up a batch of cosine values sequentially (without any other data accesses between).
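A sketch of that batching idea (all names here are placeholders, not from the question's code):
#include <algorithm>
#include <vector>
// Two long sequential streams (table, then data) instead of interleaved jumps.
double dot_with_cosine(const double* data, long size,
                       const std::vector<double>& cos_table)
{
    const long kBlock = 256;
    double block[kBlock];
    double result = 0.0;
    for (long base = 0; base < size; base += kBlock) {
        const long n = std::min(kBlock, size - base);
        for (long i = 0; i < n; ++i)  // sequential table reads
            block[i] = cos_table[(base + i) % cos_table.size()];
        for (long i = 0; i < n; ++i)  // sequential data reads
            result += data[base + i] * block[i];
    }
    return result;
}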
Dissimilar Instructions
Modern processors are more efficient at processing a batch of similar instructions. For example the pattern load, add, store is more efficient for blocks when all the loading is performed, then all adding, then all storing.
Summary
Here's an example:
register double result = 0.0;
register unsigned int i = 0U;
for (i = 0; i < size; i += 2)
{
register double cos_angle1 = /* ... */;
register double cos_angle2 = /* ... */;
result += calibdata[i + 0] * cos_angle1;
result += calibdata[i + 1] * cos_angle2;
}
The above loop is unrolled and like operations are performed in groups.
Although the keyword register may be deprecated, it is a suggestion to the compiler to use dedicated registers (if possible).
You can try to use the definition of cosine based on the complex exponential:
cos(x) = ( exp(j*x) + exp(-j*x) ) / 2, where j^2 = -1.
Store exp((2 * PI*freqscale[currentcarrier] / fs)*j) and exp(phase*j). Evaluating cos(...) then reduces to a couple of products and additions in the for loops, and sin(), cos() and exp() are only called a few times.
Here goes the implementation:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <complex.h>
#include <time.h>
#define PI 3.141592653589
typedef struct cos_plan{
double complex* expo;
int size;
}cos_plan;
double phase_func(double* calibdata, long size, double* freqscale, double fs, double phase, int currentcarrier){
double result=0; //initialization
for (int i = 0; i < size; i++){
result += calibdata[i] * cos ( (2 * PI*freqscale[currentcarrier] * i / fs) + (phase*(PI / 180.) - (PI / 2.)) );
//printf("i %d cos %g\n",i,cos ( (2 * PI*freqscale[currentcarrier] * i / fs) + (phase*(PI / 180.) - (PI / 2.)) ));
}
result = fabs(result / size);
return result;
}
double phase_func2(double* calibdata, long size, double* freqscale, double fs, double phase, int currentcarrier, cos_plan* plan){
//first, let's compute the exponentials:
//double complex phaseexp=cos(phase*(PI / 180.) - (PI / 2.))+sin(phase*(PI / 180.) - (PI / 2.))*I;
//double complex phaseexpm=conj(phaseexp);
double phasesin=sin(phase*(PI / 180.) - (PI / 2.));
double phasecos=cos(phase*(PI / 180.) - (PI / 2.));
if (plan->size<size){
double complex *tmp=realloc(plan->expo,size*sizeof(double complex));
if(tmp==NULL){fprintf(stderr,"realloc failed\n");exit(1);}
plan->expo=tmp;
plan->size=size;
}
plan->expo[0]=1;
//plan->expo[1]=exp(2 *I* PI*freqscale[currentcarrier]/fs);
plan->expo[1]=cos(2 * PI*freqscale[currentcarrier]/fs)+sin(2 * PI*freqscale[currentcarrier]/fs)*I;
//printf("%g %g\n",creall(plan->expo[1]),cimagl(plan->expo[1]));
for(int i=2;i<size;i++){
if(i%2==0){
plan->expo[i]=plan->expo[i/2]*plan->expo[i/2];
}else{
plan->expo[i]=plan->expo[i/2]*plan->expo[i/2+1];
}
}
//computing the result
double result=0; //initialization
for(int i=0;i<size;i++){
//double coss=0.5*creall(plan->expo[i]*phaseexp+conj(plan->expo[i])*phaseexpm);
double coss=creall(plan->expo[i])*phasecos-cimagl(plan->expo[i])*phasesin;
//printf("i %d cos %g\n",i,coss);
result+=calibdata[i] *coss;
}
result = fabs(result / size);
return result;
}
int main(){
//the parameters
long n=100000000;
double* calibdata=malloc(n*sizeof(double));
if(calibdata==NULL){fprintf(stderr,"malloc failed\n");exit(1);}
int freqnb=42;
double* freqscale=malloc(freqnb*sizeof(double));
if(freqscale==NULL){fprintf(stderr,"malloc failed\n");exit(1);}
for (int i = 0; i < freqnb; i++){
freqscale[i]=i*i*0.007+i;
}
double fs=n;
double phase=0.05;
//populate calibdata
for (int i = 0; i < n; i++){
calibdata[i]=i/((double)n);
calibdata[i]=calibdata[i]*calibdata[i]-calibdata[i]+0.007/(calibdata[i]+3.0);
}
//call to sample code
clock_t t;
t = clock();
double res=phase_func(calibdata,n, freqscale, fs, phase, 13);
t = clock() - t;
printf("first call got %g in %g seconds.\n",res,((float)t)/CLOCKS_PER_SEC);
//initialize
cos_plan plan;
plan.expo=malloc(n*sizeof(double complex));
plan.size=n;
t = clock();
res=phase_func2(calibdata,n, freqscale, fs, phase, 13,&plan);
t = clock() - t;
printf("second call got %g in %g seconds.\n",res,((float)t)/CLOCKS_PER_SEC);
//cleaning
free(plan.expo);
free(calibdata);
free(freqscale);
return 0;
}
Compile with gcc main.c -o main -std=c99 -lm -Wall -O3. Using the code you provided, it takes 8 seconds with size=100000000 on my computer, while the execution time of the proposed solution takes 1.5 seconds... It is not so impressive, but it is not negligible.
The solution presented does not involve any call to cos or sin in the for loops. Indeed, there are only multiplications and additions. The bottleneck is either the memory bandwidth, or the tests and memory accesses in the exponentiation by squaring (most likely the first issue, since I had to use an additional array of complex).
For complex numbers in C, see:
How to work with complex numbers in C?
Computing e^(-j) in C
If the problem is memory bandwidth, then parallelism is required... and directly computing cos would be easier. Additional simplifications could have been performed if freqscale[currentcarrier] / fs were an integer. Your problem is really close to the computation of a Discrete Cosine Transform; the present trick is close to the Discrete Fourier Transform, and the FFTW library is really good at computing these transforms.
Notice that the present code can produce inaccurate results due to loss of significance: result can be much larger than cos(...)*calibdata[i] when size is large. Using partial sums can resolve the issue.
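A minimal sketch of the partial-sums idea (names are mine): each block's sum stays small relative to its addends before being added to the running total:
// Summing in blocks limits how large the accumulator gets relative to each
// addend, reducing the loss of significance of one long running sum.
double block_sum(const double* v, long n)
{
    const long kBlock = 4096;
    double total = 0.0;
    for (long base = 0; base < n; base += kBlock) {
        const long end = (base + kBlock < n) ? base + kBlock : n;
        double partial = 0.0;
        for (long i = base; i < end; ++i)
            partial += v[i];
        total += partial;
    }
    return total;
}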
Use a simple trig identity to eliminate the - (PI / 2). This is also more accurate than performing the subtraction, which uses an inexact machine PI; that matters when values are near π/2.
cosine(x - π/2) == sine(x)
Use of const and restrict: Good compilers can perform more optimizations with this knowledge. (See also #user3528438)
// double phase_func(double* calibdata, long size,
// double* freqscale, double fs, double phase, int currentcarrier) {
double phase_func(const double* restrict calibdata, long size,
const double* restrict freqscale, double fs, double phase, int currentcarrier) {
Some platforms perform faster calculations with float vs double with a tolerable loss of precision. YMMV. Profile code both ways.
// result += calibdata[i] * cos(...
result += calibdata[i] * cosf(...
Minimize recalculations.
double angle_delta = ...;
double angle_current = ...;
for (int i = 0; i < size; i++) {
result += calibdata[i] * cos(angle_current);
angle_current += angle_delta;
}
It is unclear why the code uses long size and int currentcarrier. I'd expect the same type for both, and would use size_t, which is idiomatic for array indexing. #Daniel Jour
Reversing loops can allow a compare to 0 rather than compare to variable. Sometimes a modest performance gain.
Ensure compiler optimizations are enabled.
All together
double phase_func2(const double* restrict calibdata, size_t size,
const double* restrict freqscale, double fs, double phase,
size_t currentcarrier) {
double result = 0.0;
double angle_delta = 2.0 * PI * freqscale[currentcarrier] / fs;
double angle_current = angle_delta * (size - 1) + phase * (PI / 180);
size_t i = size;
while (i) {
result += calibdata[--i] * sinf(angle_current);
angle_current -= angle_delta;
}
result = fabs(result / size);
return result;
}
To leverage the cores you have without resorting to the GPU, use OpenMP. Testing with VS2015, with AVX2 and OpenMP enabled, the loop invariants are lifted out of the loop by the optimizer.
double phase_func3(double* calibdata, const int size, const double* freqscale,
const double fs, const double phase, const size_t currentcarrier)
{
double result{};
constexpr double PI = 3.141592653589;
#pragma omp parallel
#pragma omp for reduction(+: result)
for (int i = 0; i < size; ++i) {
result += calibdata[i] *
cos( (2 * PI*freqscale[currentcarrier] * i / fs) + (phase*(PI / 180.0) - (PI / 2.0)));
}
result = fabs(result / size);
return result;
}
The original version with AVX enabled took: ~1.4 seconds
and adding OpenMP brought it down to: ~0.51 seconds.
Pretty nice return for two pragmas and a compiler switch.
The following code does not seem to behave intuitively:
#include <random>
#include <iostream>
using namespace std;
int main()
{
mt19937 MyGenerator(40);
auto gauss = normal_distribution<double>(0,1);
auto linear = uniform_real_distribution<double>(0,1);
cout << gauss(MyGenerator) << endl; //line a
cout << linear(MyGenerator) << endl; //line b
cout << gauss(MyGenerator) << endl;
}
Running this code gives the output
-0.816097
0.705030
0.303032
If now the order of lines a and b is swapped, the output changes to
0.644008
0.338080
-0.639501
It is completely clear that the first two numbers are different now, as they are produced by different distributions. Nevertheless, why is the third number different?
My intuition is that the distribution grabs a number c = MyGenerator(), which is then mapped to a random number in the specific range, and that the generator then points to the next number in its sequence after the distribution call. So shouldn't the outcome of the third call be the same in both cases?
Another observation:
Adding a fourth call to either of the distributions does in fact seem to reproduce the same numbers.
libstdc++'s implementation of normal_distribution uses the Marsaglia polar method. The interesting thing about this method is that each pass uses two random numbers from the URNG to generate two results.
That is, the first call to the distribution calls the URNG twice (possibly more times, as it uses rejection sampling, but an even number of times) and returns one result; the following call to the distribution will not call the URNG but will return the saved second result.
Here's an extract from the source code, slightly reformatted:
if (_M_saved_available)
{
_M_saved_available = false;
ret = _M_saved;
}
else
{
result_type x, y, r2;
do
{
x = result_type(2.0) * aurng() - 1.0;
y = result_type(2.0) * aurng() - 1.0;
r2 = x * x + y * y;
}
while (r2 > 1.0 || r2 == 0.0);
const result_type mult = std::sqrt(-2 * std::log(r2) / r2);
_M_saved = x * mult;
_M_saved_available = true;
ret = y * mult;
}
There's no requirement that the distribution call the underlying generator once for each value. Some distributions are best calculated by combining multiple random values.
For example, in the GNU implementation, the implementation of the uniform distribution is
return (__aurng() * (__p.b() - __p.a())) + __p.a();
calling the generator __aurng once; while the core of the normal distribution is:
do
{
__x = result_type(2.0) * __aurng() - 1.0;
__y = result_type(2.0) * __aurng() - 1.0;
__r2 = __x * __x + __y * __y;
}
while (__r2 > 1.0 || __r2 == 0.0);
calling it at least twice.
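You can observe this directly by wrapping the engine so that every invocation is counted (a sketch; the CountingEngine wrapper is mine, not part of the standard library):
#include <iostream>
#include <random>
// Counts how many times the distribution pulls from the underlying engine.
struct CountingEngine {
    using result_type = std::mt19937::result_type;
    static constexpr result_type min() { return std::mt19937::min(); }
    static constexpr result_type max() { return std::mt19937::max(); }
    result_type operator()() { ++calls; return eng(); }
    std::mt19937 eng;
    unsigned long calls = 0;
};
int main()
{
    CountingEngine g;
    std::normal_distribution<double> gauss(0, 1);
    gauss(g);
    std::cout << g.calls << '\n'; // libstdc++: an even number >= 2
    gauss(g);
    std::cout << g.calls << '\n'; // unchanged: the saved second result is returned
}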
I wrote this simple code, which reads a distance from the Sharp infrared sensor and prints the average in cm over serial.
When I run this code on the Arduino Mega board, the LED on pin 13 starts blinking and the program does nothing else. Where is the bug in this code?
#include <QueueList.h>
const int ANALOG_SHARP = 0; //Set pin data from sharp.
QueueList <float> queuea;
float cm;
float qu1;
float qu2;
float qu3;
float qu4;
float qu5;
void setup() {
Serial.begin(9600);
}
void loop() {
cm = read_gp2d12_range(ANALOG_SHARP); //Convert to cm (unit).
queuea.push(cm); //Add item to queue; the Arduino crashes as soon as I add this line.
if ( 5 <= queuea.peek()) {
Serial.println(average());
}
}
float read_gp2d12_range(byte pin) { //Function converting to cm (unit).
int tmp;
tmp = analogRead(pin);
if (tmp < 3)
return -1; // Invalid value.
return (6787.0 /((float)tmp - 3.0)) - 4.0;
}
float average() { //Calculate average length
qu1 += queuea.pop();
qu2 += queuea.pop();
qu3 += queuea.pop();
qu4 += queuea.pop();
qu5 += queuea.pop();
float aver = ((qu1+qu2+qu3+qu4+qu5)/5);
return aver;
}
I agree with the peek() -> count() error listed by vhallac. But I'll also point out that you should consider averaging by powers of 2 unless there is a strong case to do otherwise.
The reason is that on microcontrollers, division is slow. By averaging over a power of 2 (2,4,8,16,etc.) you can simply calculate the sum and then bitshift it.
To calculate the average of 2: (v1 + v2) >> 1
To calculate the average of 4: (v1 + v2 + v3 + v4) >> 2
To calculate the average of n values (where n is a power of 2), just right-bitshift the sum by log2(n).
As long as the datatype for your sum variable is big enough and won't overflow, this is much easier and much faster.
Note: this won't work for floats in general; in fact, microcontrollers aren't optimized for floats. You should consider converting from int (which is what I assume your ADC is reading) to float at the end, after the averaging, rather than before.
By converting from int to float and then averaging floats, you lose more precision than by averaging ints and then converting the result to a float.
Other:
You're using the += operator without initializing the variables (qu1, qu2, etc.); it's good practice to initialize them if you're going to use +=, though it looks as if = would work fine here.
For floats, I'd have written the average function as:
float average(QueueList<float> & q, int n)
{
float sum = 0;
for(int i=0; i<n; i++)
{
sum += q.pop();
}
return (sum / (float) n);
}
And called it: average(queuea, 5);
You could use this to average any number of sensor readings and later use the same code to later average floats in a completely different QueueList. Passing the number of readings to average as a parameter will really come in handy in the case that you need to tweak it.
TL;DR:
Here's how I would have done it:
#include <QueueList.h>
const int ANALOG_SHARP=0; // set pin data from sharp
const int AvgPower = 2; // 1 for 2 readings, 2 for 4 readings, 3 for 8, etc.
const int AvgCount = 1 << AvgPower; // 2^AvgPower, without floating-point pow()
QueueList <int> SensorReadings;
void setup(){
Serial.begin(9600);
}
void loop()
{
int reading = analogRead(ANALOG_SHARP);
SensorReadings.push(reading);
if(SensorReadings.count() > AvgCount)
{
int avg = average2(SensorReadings, AvgPower);
Serial.println(gp2d12_to_cm(avg)); // name must match the definition below
}
}
float gp2d12_to_cm(int reading)
{
if(reading <= 3){ return -1; }
return((6787.0 /((float)reading - 3.0)) - 4.0);
}
int average2(QueueList<int> & q, int AvgPower)
{
int AvgCount = 1 << AvgPower; // 2^AvgPower
long sum = 0;
for(int i=0; i<AvgCount; i++)
{
sum += q.pop();
}
return (sum >> AvgPower);
}
You are using queuea.peek() to obtain the count. This will only return the last element in queue. You should use queuea.count() instead.
Also you might consider changing the condition tmp < 3 to tmp <= 3. If tmp is 3, you divide by zero.
Great improvement, jedwards; however, my first question is why use a QueueList instead of a plain int array.
As an example I would do the following:
int average(int analog_reading)
{
#define NUM_OF_AVG 5
static int readings[NUM_OF_AVG];
static int next_position;
if (++next_position >= NUM_OF_AVG)
{
next_position = 0;
}
readings[next_position] = analog_reading;
int sum = 0; // local, so the total is recomputed each call instead of growing forever
for (int i = 0; i < NUM_OF_AVG; i++)
{
sum += readings[i];
}
return sum / NUM_OF_AVG;
}
Now I compute a new rolling average with every reading, and it eliminates all the issues related to dynamic memory allocation (memory fragmentation, running out of memory, memory leaks) on an embedded device.
I appreciate and understand the use of shifting for a division by 2,4 or 8, however I would stay away from that technique for two reasons.
I think readability and maintainability of the source code are more important than saving a little time with a shift instead of a divide, unless you can test and verify that the divide is a bottleneck.
Second, I believe most current optimizing compilers will emit a shift where possible; I know GCC does.
I will leave refactoring out the for loop for the next guy.
I did some speed testing to figure out what is fastest when doing multiplication or division on numbers. I had to work really hard to defeat the optimiser. I got nonsensical results, such as a massive loop running in 2 microseconds, or multiplication being the same speed as division (if only that were true).
After I finally worked hard enough to defeat enough of the compiler optimisations, while still letting it optimise for speed, I got these speed results. They may be of interest to someone else?
If my test is STILL FLAWED, let me know, but be kind, seeing as I just spent two hours writing this crap :P
64 time: 3826718 us
32 time: 2476484 us
D(mul) time: 936524 us
D(div) time: 3614857 us
S time: 1506020 us
"Multiplying to divide" using doubles seems the fastest way to do a division, followed by integer division. I did not test the accuracy of division. Could it be that "proper division" is more accurate? I have no desire to find out after these speed test results as I'll just be using integer division on a base 10 constant and letting my compiler optimise it for me ;) (and not defeating it's optimisations either).
Here's the code I used to get the results:
// printf needs <cstdio>, clock() needs <ctime>, and the fixed-width integer
// types need <cstdint>.
#include <cstdint>
#include <cstdio>
#include <ctime>
int Run(int bla, int div, int add, int minus) {
// these parameters are to force the compiler to not be able to optimise away the
// multiplications and divides :)
long LoopMax = 100000000;
uint32_t Origbla32 = 1000000000;
long i = 0;
uint32_t bla32 = Origbla32;
uint32_t div32 = div;
clock_t Time32 = clock();
for (i = 0; i < LoopMax; i++) {
div32 += add;
div32 -= minus;
bla32 = bla32 / div32;
bla32 += bla;
bla32 = bla32 * div32;
}
Time32 = clock() - Time32;
uint64_t bla64 = bla32;
clock_t Time64 = clock();
uint64_t div64 = div;
for (long i = 0; i < LoopMax; i++) {
div64 += add;
div64 -= minus;
bla64 = bla64 / div64;
bla64 += bla;
bla64 = bla64 * div64;
}
Time64 = clock() - Time64;
double blaDMul = Origbla32;
double multodiv = 1.0 / (double)div;
double multomul = div;
clock_t TimeDMul = clock();
for (i = 0; i < LoopMax; i++) {
multodiv += add;
multomul -= minus;
blaDMul = blaDMul * multodiv;
blaDMul += bla;
blaDMul = blaDMul * multomul;
}
TimeDMul = clock() - TimeDMul;
double blaDDiv = Origbla32;
clock_t TimeDDiv = clock();
for (i = 0; i < LoopMax; i++) {
multodiv += add;
multomul -= minus;
blaDDiv = blaDDiv / multomul;
blaDDiv += bla;
blaDDiv = blaDDiv / multodiv;
}
TimeDDiv = clock() - TimeDDiv;
float blaS = Origbla32;
float divS = div;
clock_t TimeS = clock();
for (i = 0; i < LoopMax; i++) {
divS += add;
divS -= minus;
blaS = blaS / divS;
blaS += bla;
blaS = blaS * divS;
}
TimeS = clock() - TimeS;
printf("64 time: %i us (%i)\n", (int)Time64, (int)bla64);
printf("32 time: %i us (%i)\n", (int)Time32, bla32);
printf("D(mul) time: %i us (%f)\n", (int)TimeDMul, blaDMul);
printf("D(div) time: %i us (%f)\n", (int)TimeDDiv, blaDDiv);
printf("S time: %i us (%f)\n", (int)TimeS, blaS);
return 0;
}
int main(int argc, char* const argv[]) {
Run(0, 10, 0, 0); // adds and minuses 0 so it doesn't affect the math, only kills the opts
return 0;
}
There are lots of ways to perform certain arithmetic, so there might not be a single answer (shifting, fractional multiplication, actual division, some round-trip through a logarithm unit, etc; these might all have different relative costs depending on the operands and resource allocation).
Let the compiler do its thing with the program and data flow information it has.
For some data applicable to assembly on x86, you might look at: "Instruction latencies and throughput for AMD and Intel x86 processors"
What is fastest will depend entirely on the target architecture. It looks here like you're interested only in the platform you happen to be on, which guessing from your execution times seems to be 64-bit x86, either Intel (Core2?) or AMD.
That said, floating-point multiplication by the inverse will be the fastest on many platforms, but is, as you speculate, usually less accurate than a floating-point divide (two roundings instead of one -- whether or not that matters for your usage is a separate question). In general, you are better off re-arranging your algorithm to use fewer divides than you are jumping through hoops to make division as efficient as possible (the fastest division is the one you don't do), and make sure to benchmark before you spend time optimizing at all, as algorithms that bottleneck on division are few and far between.
Also, if you have integer sources and need an integer result, make sure to include the cost of conversion between integer and floating-point in your benchmarking.
Since you're interested in timings on a specific machine, you should be aware that Intel now publishes this information in their Optimization Reference Manual (pdf). Specifically, you will be interested in the tables of Appendix C section 3.1, "Latency and Throughput with Register Operands".
Be aware that integer divide timings depend strongly on the actual values involved. Based on the information in that guide, it seems that your timing routines still have a fair bit of overhead, as the performance ratios you measure don't match up with Intel's published information.
As Stephen mentioned, use the optimisation manual - but you should also be considering the use of SSE instructions. These can do 4 or 8 divisions / multiplications in a single instruction.
Also, it is fairly common for a division to take a single clock cycle to issue. The result may not be available for several clock cycles (this is called latency); however, the next division can begin during this time (overlapping with the first) as long as it does not require the result of the first. This is due to pipelining in the CPU, in the same way that you can wash more clothes while the previous load is still drying.
Multiplying to divide is a common trick, and should be used wherever your divisor changes infrequently.
There is a very good chance that you will spend time and effort making the maths fast, only to discover that it is the speed of memory access (as you navigate the input and write the output) that limits your final implementation.
I wrote a flawed test to do this on MSVC 2008
double i32Time = GetTime();
{
volatile __int32 i = 4;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i /= 61;
count++;
}
}
i32Time = GetTime() - i32Time;
double i64Time = GetTime();
{
volatile __int64 i = 4;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i /= 61;
count++;
}
}
i64Time = GetTime() - i64Time;
double fTime = GetTime();
{
volatile float i = 4;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i /= 4.0f;
count++;
}
}
fTime = GetTime() - fTime;
double fmTime = GetTime();
{
volatile float i = 4;
const float div = 1.0f / 4.0f;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i *= div;
count++;
}
}
fmTime = GetTime() - fmTime;
double dTime = GetTime();
{
volatile double i = 4;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i /= 4.0f;
count++;
}
}
dTime = GetTime() - dTime;
double dmTime = GetTime();
{
volatile double i = 4;
const double div = 1.0f / 4.0f;
__int32 count = 0;
__int32 max = 1000000;
while( count < max )
{
i *= div;
count++;
}
}
dmTime = GetTime() - dmTime;
DebugOutput( _T( "%f\n" ), i32Time );
DebugOutput( _T( "%f\n" ), i64Time );
DebugOutput( _T( "%f\n" ), fTime );
DebugOutput( _T( "%f\n" ), fmTime );
DebugOutput( _T( "%f\n" ), dTime );
DebugOutput( _T( "%f\n" ), dmTime );
DebugBreak();
I then ran it on an AMD64 Turion 64 in 32-bit mode. The results I got were as follows:
0.006622
0.054654
0.006283
0.006353
0.006203
0.006161
The reason the test is flawed is the usage of volatile, which forces the compiler to re-load the variable from memory just in case it has changed. All in all, it shows there is precious little difference between any of the implementations on this machine (__int64 is obviously slow).
It also categorically shows that the MSVC compiler performs the multiply-by-reciprocal optimisation. I imagine GCC does the same, if not better. If I change the float and double division checks to divide by "i" then the time increases significantly. Though a lot of that could be the re-loading from memory, it is obvious the compiler cannot optimise that away so easily.
To understand such micro-optimisations try reading this pdf.
All in I'd argue that if you are worrying about such things you obviously haven't profiled your code. Profile and fix the problems as and when they actually ARE a problem.
Agner Fog has done some pretty detailed measurements himself, which can be found here. If you're really trying to optimize stuff, you should read the rest of the documents from his software optimization resources as well.
I would point out that, even if you are measuring non-vectorized floating point operations, the compiler has two options for the generated assembly: it can use the FPU instructions (fadd, fmul) or it can use SSE instructions while still manipulating one floating point value per instruction (addss, mulss). In my experience the SSE instructions are faster and have fewer inaccuracies, but compilers don't make them the default because that could break compatibility with code that relies on the old behavior. You can turn it on in gcc with the -mfpmath=sse flag.