Eigen: simplifying expression with Eigen intrinsics - c++

I'm trying to scale all the columns in a matrix by the corresponding value from a vector. Where this value is 0, I want to replace that column with a column from another matrix scaled by a constant. Sounds complicated, but in Matlab it's pretty simple (though probably not fully optimized):
a(:,b ~= 0) = a(:,b ~= 0)./b(b ~= 0);
a(:,b == 0) = c(:,b == 0)*x;
Doing it with a for loop in C++ would also be pretty simple:
RowVectorXf b;
Matrix3Xf a, c;
float x;
for (int i = 0; i < b.size(); i++) {
    if (b(i) != 0) {
        a.col(i) = a.col(i) / b(i);
    } else {
        a.col(i) = c.col(i) * x;
    }
}
Is it possible to do this operation (faster) with Eigen intrinsics such as colwise() and select()?
P.S. I tried to shorten the if-else to the form
a.col(i) = (b(i) != 0) ? (a.col(i) / b(i)) : (c.col(i) * x);
But this does not compile, failing with error: operands to ?: have different types ... (followed by a long listing of the types).
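(Side note, a sketch of the usual workaround rather than something from the original post: the ternary compiles once both branches are forced to the same concrete type, for example by evaluating each into a Vector3f; case 1 of the test code below does the equivalent with .eval().)
a.col(i) = (b(i) != 0) ? Vector3f(a.col(i) / b(i)) : Vector3f(c.col(i) * x);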
Edit:
I added the code for testing the answers; here it is:
#include <Eigen/Dense>
#include <stdlib.h>
#include <chrono>
#include <iostream>
using namespace std;
using namespace Eigen;
void flushCache()
{
    const int size = 20 * 1024 * 1024; // Allocate 20M. Set much larger than L2
    volatile char *c = (char *) malloc(size);
    volatile int i = 8;
    for (volatile int j = 0; j < size; j++)
        c[j] = i * j;
    free((void*) c);
}
int main()
{
    Matrix3Xf a(3, 1000000);
    RowVectorXf b(1000000);
    Matrix3Xf c(3, 1000000);
    float x = 0.4;
    a.setRandom();
    b.setRandom();
    c.setRandom();
    for (int testNumber = 0; testNumber < 4; testNumber++) {
        flushCache();
        chrono::high_resolution_clock::time_point t1 = chrono::high_resolution_clock::now();
        for (int repetition = 0; repetition < 1000; repetition++) {
            switch (testNumber) {
            case 0:
                for (int i = 0; i < b.size(); i++) {
                    if (b(i) != 0) {
                        a.col(i) = a.col(i) / b(i);
                    } else {
                        a.col(i) = c.col(i) * x;
                    }
                }
                break;
            case 1:
                for (int i = 0; i < b.size(); i++) {
                    a.col(i) = (b(i) != 0) ? (a.col(i) / b(i)).eval() : (c.col(i) * x).eval();
                }
                break;
            case 2:
                for (int i = 0; i < b.size(); i++) {
                    a.col(i) = (b(i) != 0) ? (a.col(i) * (1.0f / b(i))) : (c.col(i) * x);
                }
                break;
            case 3:
                a = b.cwiseEqual(0.0f).replicate< 3, 1 >().select(c * x, a.cwiseQuotient(b.replicate< 3, 1 >()));
                break;
            default:
                break;
            }
        }
        chrono::high_resolution_clock::time_point t2 = chrono::high_resolution_clock::now();
        auto duration = chrono::duration_cast< chrono::milliseconds >(t2 - t1).count();
        cout << "duration: " << duration << "ms" << endl;
    }
    return 0;
}
Sample output is:
duration: 14391ms
duration: 15219ms
duration: 9148ms
duration: 13513ms
By the way, when not using setRandom() to initialize the variables, the output is totally different:
duration: 10255ms
duration: 11076ms
duration: 8250ms
duration: 5198ms
@chtz suggests it's because of denormalized values, but I think it's because of branch prediction. Evidence for the branch-prediction explanation is that initializing with b.setZero() leads to the same timings as not initializing at all.
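One way to separate the two explanations (a sketch, assuming an x86 build with SSE; the macros below come from the standard SSE headers): enable flush-to-zero and denormals-are-zero before the benchmark. If the timings then no longer depend on how the data is initialized, denormals were the cause; if they still do, branch prediction remains the prime suspect.
#include <xmmintrin.h>   // _MM_SET_FLUSH_ZERO_MODE
#include <pmmintrin.h>   // _MM_SET_DENORMALS_ZERO_MODE
void disableDenormals()
{
    // Treat denormal results and denormal inputs as zero for the calling thread.
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
}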

a.col(i) = (b(i) != 0) ? (a.col(i) * (1.0f/b(i))) : (c.col(i) * x);
would work, but only because both expressions would then be of the same type, and it will likely not save any time (a ?: expression is essentially translated to the same code as an if-else branch).
If you prefer writing it in one line, the following expression should work:
a = b.cwiseEqual(0.0f).replicate<3,1>().select(c*x, a.cwiseQuotient(b.replicate<3,1>()));
Again, I doubt it will make any significant performance difference.
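Another variant worth trying (a sketch only; I have not benchmarked it against the loop, so any speedup is an assumption): precompute a per-column factor so the hot path multiplies instead of divides, apply it to all columns in one product, and then patch the zero columns.
// Reciprocal of b where b != 0, and 0 otherwise.
RowVectorXf scale = b.cwiseEqual(0.0f).select(RowVectorXf::Zero(b.size()), b.cwiseInverse());
a = a * scale.asDiagonal();                                 // scale every column
a = b.cwiseEqual(0.0f).replicate<3, 1>().select(c * x, a);  // overwrite the zero columns with c * x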

Related

Fastest way to get square root in float value

I am trying to find the fastest way to take the square root of a float in C++. I am using this type of function in a huge particle-movement calculation, e.g. computing the distance between two particles, where we need a square root. So any suggestion would be very helpful.
I have tried, and below is my code:
#include <math.h>
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
#define CHECK_RANGE 100
inline float msqrt(float a)
{
int i;
for (i = 0;i * i <= a;i++);
float lb = i - 1; //lower bound
if (lb * lb == a)
return lb;
float ub = lb + 1; // upper bound
float pub = ub; // previous upper bound
for (int j = 0;j <= 20;j++)
{
float ub2 = ub * ub;
if (ub2 > a)
{
pub = ub;
ub = (lb + ub) / 2; // mid value of lower and upper bound
}
else
{
lb = ub;
ub = pub;
}
}
return ub;
}
void check_msqrt()
{
for (size_t i = 0; i < CHECK_RANGE; i++)
{
msqrt(i);
}
}
void check_sqrt()
{
for (size_t i = 0; i < CHECK_RANGE; i++)
{
sqrt(i);
}
}
int main()
{
auto start1 = high_resolution_clock::now();
check_msqrt();
auto stop1 = high_resolution_clock::now();
auto duration1 = duration_cast<microseconds>(stop1 - start1);
cout << "Time for check_msqrt = " << duration1.count() << " micro secs\n";
auto start2 = high_resolution_clock::now();
check_sqrt();
auto stop2 = high_resolution_clock::now();
auto duration2 = duration_cast<microseconds>(stop2 - start2);
cout << "Time for check_sqrt = " << duration2.count() << " micro secs";
//cout << msqrt(3);
return 0;
}
The output of the above code shows the implemented method to be about 4 times slower than sqrt from math.h.
I need something faster than the math.h version.
In short, I do not think it is possible to implement something generally faster than the standard library version of sqrt.
Performance is a very important parameter when implementing standard library functions and it is fair to assume that such a commonly used function as sqrt is optimized as much as possible.
Beating the standard library function would require a special case, such as:
Availability of a suitable assembler instruction - or other specialized hardware support - on the particular system for which the standard library has not been specialized.
Knowledge of the needed range or precision. The standard library function must handle all cases specified by the standard. If the application only needs a subset of that, or maybe only requires an approximate result, then perhaps an optimization is possible (see the sketch after this list).
Making a mathematical reduction of the calculations, or combining some calculation steps in a smart way, so that an efficient implementation can be made for that combination.
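As a hedged illustration of the first two special cases (an assumption-laden sketch: it targets x86 with SSE, accepts an approximate result, and is not a drop-in replacement for std::sqrt), the hardware reciprocal-square-root estimate plus one Newton-Raphson step is close to single precision but not correctly rounded:
#include <xmmintrin.h>   // SSE: _mm_set_ss, _mm_rsqrt_ss, _mm_cvtss_f32
// Sketch: approximate sqrt via the hardware 1/sqrt estimate plus one Newton step.
// Caveats: x == 0 and x < 0 need special-casing (0 * inf is NaN), and the result
// is only an approximation, not a correctly rounded sqrt.
inline float approx_sqrt(float x)
{
    float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));  // ~12-bit estimate of 1/sqrt(x)
    y = y * (1.5f - 0.5f * x * y * y);                     // one Newton-Raphson refinement
    return x * y;                                          // sqrt(x) = x * (1/sqrt(x))
}
Note also that in the check_* benchmark loops above the results are discarded, so a compiler may be free to remove the calls entirely; any timing comparison should make sure the results are actually used.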
Here's another alternative to binary search. It may not be as fast as std::sqrt (I haven't tested it), but it will definitely be faster than your binary search.
#include <cmath>    // std::frexp, std::ldexp, std::isnan, INFINITY, NAN
auto
Sqrt(float x)
{
    using F = decltype(x);
    if (x == 0 || x == INFINITY || std::isnan(x))
        return x;
    if (x < 0)
        return F{NAN};
    int e;
    x = std::frexp(x, &e);
    if (e % 2 != 0)
    {
        ++e;
        x /= 2;
    }
    auto y = (F{-160}/567*x + F{2'848}/2'835)*x + F{155}/567;
    y = (y + x/y)/2;
    y = (y + x/y)/2;
    return std::ldexp(y, e/2);
}
After getting +/-0, NaN, inf, and negatives out of the way, it works by decomposing the float into a mantissa in the range [1/4, 1) times 2^e, where e is an even integer. The answer is then sqrt(mantissa) * 2^(e/2).
The sqrt of the mantissa can be guessed at with a least-squares quadratic curve fit over the range [1/4, 1]. That good guess is then refined by two iterations of Newton–Raphson. This will get you within 1 ulp of the correctly rounded result. A good std::sqrt will typically get that last bit correct.
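A quick way to sanity-check the Sqrt() function above against the standard library (a standalone snippet added for illustration; it assumes the function above is in scope):
#include <cmath>
#include <cstdio>
int main()
{
    const float vals[] = {0.0f, 0.25f, 0.5f, 2.0f, 3.0f, 1e-6f, 1e6f};
    for (float v : vals)
        std::printf("Sqrt(%g) = %.9g   std::sqrt = %.9g\n", v, Sqrt(v), std::sqrt(v));
}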
I have also tried the algorithm mentioned in https://en.wikipedia.org/wiki/Fast_inverse_square_root, but did not get the desired result; please check:
#include <math.h>
#include <iostream>
#include <chrono>
#include <bit>
#include <limits>
#include <cstdint>
using namespace std;
using namespace std::chrono;
#define CHECK_RANGE 10000
inline float msqrt(float a)
{
int i;
for (i = 0;i * i <= a;i++);
float lb = i - 1; //lower bound
if (lb * lb == a)
return lb;
float ub = lb + 1; // upper bound
float pub = ub; // previous upper bound
for (int j = 0;j <= 20;j++)
{
float ub2 = ub * ub;
if (ub2 > a)
{
pub = ub;
ub = (lb + ub) / 2; // mid value of lower and upper bound
}
else
{
lb = ub;
ub = pub;
}
}
return ub;
}
/* mentioned here -> https://en.wikipedia.org/wiki/Fast_inverse_square_root */
inline float Q_sqrt(float number)
{
union Conv {
float f;
uint32_t i;
};
Conv conv;
conv.f= number;
conv.i = 0x5f3759df - (conv.i >> 1);
conv.f *= 1.5F - (number * 0.5F * conv.f * conv.f);
return 1/conv.f;
}
void check_Qsqrt()
{
for (size_t i = 0; i < CHECK_RANGE; i++)
{
Q_sqrt(i);
}
}
void check_msqrt()
{
for (size_t i = 0; i < CHECK_RANGE; i++)
{
msqrt(i);
}
}
void check_sqrt()
{
for (size_t i = 0; i < CHECK_RANGE; i++)
{
sqrt(i);
}
}
int main()
{
auto start1 = high_resolution_clock::now();
check_msqrt();
auto stop1 = high_resolution_clock::now();
auto duration1 = duration_cast<microseconds>(stop1 - start1);
cout << "Time for check_msqrt = " << duration1.count() << " micro secs\n";
auto start2 = high_resolution_clock::now();
check_sqrt();
auto stop2 = high_resolution_clock::now();
auto duration2 = duration_cast<microseconds>(stop2 - start2);
cout << "Time for check_sqrt = " << duration2.count() << " micro secs\n";
auto start3 = high_resolution_clock::now();
check_Qsqrt();
auto stop3 = high_resolution_clock::now();
auto duration3 = duration_cast<microseconds>(stop3 - start3);
cout << "Time for check_Qsqrt = " << duration3.count() << " micro secs\n";
//cout << Q_sqrt(3);
//cout << sqrt(3);
//cout << msqrt(3);
return 0;
}
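One note on the Q_sqrt attempt above (a sketch under the assumption that C++20 is available, which the already-included <bit> header suggests): reading a non-active union member is formally undefined behaviour in C++, and the final 1/conv.f reintroduces a division. std::bit_cast avoids the type-punning, a second Newton step improves the accuracy, and multiplying by the input avoids the division, but the result is still an approximation rather than a correctly rounded sqrt.
#include <bit>       // std::bit_cast (C++20)
#include <cstdint>
// Sketch: same magic-constant estimate as Q_sqrt, without union type-punning.
inline float q_sqrt2(float number)
{
    std::uint32_t i = std::bit_cast<std::uint32_t>(number);
    i = 0x5f3759df - (i >> 1);
    float y = std::bit_cast<float>(i);           // rough estimate of 1/sqrt(number)
    y *= 1.5f - (number * 0.5f * y * y);         // Newton-Raphson step 1
    y *= 1.5f - (number * 0.5f * y * y);         // Newton-Raphson step 2 (better accuracy)
    return number * y;                           // sqrt(x) = x * (1/sqrt(x)), no division
}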

Matrix inversion slower using threads

I made a function that computes the inverse of a matrix, and then a multithreaded version, since I have to invert arrays larger than 2000 x 2000.
A 1000x1000 array takes 2.5 seconds unthreaded (on an i5-4460, 4 cores, 2.9 GHz)
and 7.25 seconds multithreaded.
I placed the threading in the part that consumes the most time. What is wrong?
Is it because vectors are used instead of two-dimensional arrays?
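On that last point, here is a side sketch (an illustration added here, not from the original post, and not benchmarked against the code below): a single contiguous buffer indexed as i * dim + j removes one level of indirection compared with vector< vector<double> > and keeps consecutive rows adjacent in memory.
#include <cstddef>
#include <vector>
// Sketch: flat, contiguous row-major storage instead of vector< vector<double> >.
// Element (i, j) lives at data[i * dim + j], so there is no per-row pointer chase.
struct FlatMatrix
{
    int dim;
    std::vector<double> data;
    explicit FlatMatrix(int d) : dim(d), data(static_cast<std::size_t>(d) * d, 0.0) {}
    double& operator()(int i, int j) { return data[static_cast<std::size_t>(i) * dim + j]; }
    double operator()(int i, int j) const { return data[static_cast<std::size_t>(i) * dim + j]; }
};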
This is the minimum code to test both versions:
#include<iostream>
#include <vector>
#include <stdlib.h>
#include <time.h>
#include <chrono>
#include <thread>
const int NUCLEOS = 8;
#ifdef __linux__
#include <unistd.h> //usleep()
typedef std::chrono::system_clock t_clock; //try to use high_resolution_clock on new linux x64 computer!
#else
typedef std::chrono::high_resolution_clock t_clock;
#pragma warning(disable:4996)
#endif
using namespace std;
std::chrono::time_point<t_clock> start_time, stop_time = start_time; char null_char = '\0';
void timer(char *title = 0, int data_size = 1) { stop_time = t_clock::now(); double us = (double)chrono::duration_cast<chrono::microseconds>(stop_time - start_time).count(); if (title) printf("%s time = %7lgms = %7lg MOPs\n", title, (double)us*1e-3, (double)data_size / us); start_time = t_clock::now(); }
//makes columns 0
void colum_zero(vector< vector<double> > &x, vector< vector<double> > &y, int pos0, int pos1,int dim, int ord);
//returns inverse of x, x is not modified, not threaded
vector< vector<double> > inverse(vector< vector<double> > x)
{
if (x.size() != x[0].size())
{
cout << "ERROR on inverse() not square array" << endl; getchar(); return{};//returns a null
}
size_t dim = x.size();
int i, j, ord;
vector< vector<double> > y(dim,vector<double>(dim,0));//initializes output = 0
//init_2Dvector(y, dim, dim);
//1. Unity array y:
for (i = 0; i < dim; i++)
{
y[i][i] = 1.0;
}
double diagon, coef;
double *ptrx, *ptry, *ptrx2, *ptry2;
for (ord = 0; ord<dim; ord++)
{
//2. Make the diagonal element of x equal to 1
int i2;
if (fabs(x[ord][ord])<1e-15) //If that element is 0, a line that contains a non zero is added
{
for (i2 = ord + 1; i2<dim; i2++)
{
if (fabs(x[i2][ord])>1e-15) break;
}
if (i2 >= dim)
return{};//error, returns null
for (i = 0; i<dim; i++)//added a line without 0
{
x[ord][i] += x[i2][i];
y[ord][i] += y[i2][i];
}
}
diagon = 1.0/x[ord][ord];
ptry = &y[ord][0];
ptrx = &x[ord][0];
for (i = 0; i < dim; i++)
{
*ptry++ *= diagon;
*ptrx++ *= diagon;
}
//uses the same function but not threaded:
colum_zero(x,y,0,dim,dim,ord);
}//end ord
return y;
}
//threaded version
vector< vector<double> > inverse_th(vector< vector<double> > x)
{
if (x.size() != x[0].size())
{
cout << "ERROR on inverse() not square array" << endl; getchar(); return{};//returns a null
}
int dim = (int) x.size();
int i, ord;
vector< vector<double> > y(dim, vector<double>(dim, 0));//initializes output = 0
//init_2Dvector(y, dim, dim);
//1. Unity array y:
for (i = 0; i < dim; i++)
{
y[i][i] = 1.0;
}
std::thread tarea[NUCLEOS];
double diagon;
double *ptrx, *ptry;// , *ptrx2, *ptry2;
for (ord = 0; ord<dim; ord++)
{
//2. Make the diagonal element of x equal to 1
int i2;
if (fabs(x[ord][ord])<1e-15) //If the diagonal element is 0, a row with a non-zero element in this column is added to it
{
for (i2 = ord + 1; i2<dim; i2++)
{
if (fabs(x[i2][ord])>1e-15) break;
}
if (i2 >= dim)
return{};//error, returns null
for (i = 0; i<dim; i++)//A row with a non-zero element is added, so the pivot becomes non-zero and we avoid dividing by 0 later
{
x[ord][i] += x[i2][i];
y[ord][i] += y[i2][i];
}
}
diagon = 1.0 / x[ord][ord];
ptry = &y[ord][0];
ptrx = &x[ord][0];
for (i = 0; i < dim; i++)
{
*ptry++ *= diagon;
*ptrx++ *= diagon;
}
int pos0 = 0, N1 = dim;//initial array position
if ((N1<1) || (N1>5000))
{
cout << "It is detected out than 1-5000 simulations points=" << N1 << " ABORT or press enter to continue" << endl; getchar();
}
//cout << "Initiation of " << NUCLEOS << " threads" << endl;
for (int thread = 0; thread<NUCLEOS; thread++)
{
int pos1 = (int)((thread + 1)*N1 / NUCLEOS);//next position
tarea[thread] = std::thread(colum_zero, std::ref(x), std::ref(y), pos0, pos1, dim, ord);//note: coil current=1
pos0 = pos1;//next thread will work at next point
}
for (int thread = 0; thread<NUCLEOS; thread++)
{
tarea[thread].join();
//cout << "Thread num: " << thread << " end\n";
}
}//end ord
return y;
}
//makes columns 0
void colum_zero(vector< vector<double> > &x, vector< vector<double> > &y, int pos0, int pos1,int dim, int ord)
{
double coef;
double *ptrx, *ptry, *ptrx2, *ptry2;
//Zero out column ord, except for the diagonal element:
for (int i = pos0; i<pos1; i++)//Begin to end for every thread
{
if (i == ord) continue;
coef = x[i][ord];//element to make 0
if (fabs(coef)<1e-15) continue; //If already zero, it is avoided
ptry = &y[i][0];
ptry2 = &y[ord][0];
ptrx = &x[i][0];
ptrx2 = &x[ord][0];
for (int j = 0; j < dim; j++)
{
*ptry++ -= coef * (*ptry2++);//1st matrix (y)
*ptrx++ -= coef * (*ptrx2++);//2nd matrix (x)
}
}
}
void test_6_inverse(int dim)
{
vector< vector<double> > vec1(dim, vector<double>(dim));
for (int i=0;i<dim;i++)
for (int j = 0; j < dim; j++)
{
vec1[i][j] = (-1.0 + 2.0*rand() / RAND_MAX) * 10000;
}
vector< vector<double> > vec2,vec3;
double ini, end;
ini = (double)clock();
vec2 = inverse(vec1);
end = (double)clock();
cout << "=== Time inverse unthreaded=" << (end - ini) / CLOCKS_PER_SEC << endl;
ini=end;
vec3 = inverse_th(vec1);
end = (double)clock();
cout << "=== Time inverse threaded=" << (end - ini) / CLOCKS_PER_SEC << endl;
cout<<vec2[2][2]<<" "<<vec3[2][2]<<endl;//print a result so the compiler cannot skip computing the inverses
cout << endl;
}
int main()
{
test_6_inverse(1000);
cout << endl << "=== END ===" << endl; getchar();
return 1;
}
After looking deeper into the code of the colum_zero() function, I have seen that one thread rewrites data that is then used by other threads, so the threads are not INDEPENDENT from each other. Fortunately the compiler detects it and avoids it.
Conclusions:
It is not recommended to try to multithread the Gauss-Jordan method on its own this way.
If somebody finds that the multithreaded version is slower even though the work is spread correctly across the threads, it may be because the results of one thread are used by another.
The main function inverse() works and can be used by other programmers, so this question should not be deleted.
Unanswered question:
What matrix inversion method could be spread across many independent threads, so that it could be used on a GPU?

Gradient descent converging towards the wrong value

I'm trying to implement a gradient descent algorithm in C++. Here's the code I have so far:
#include <iostream>
double X[] {163,169,158,158,161,172,156,161,154,145};
double Y[] {52, 68, 49, 73, 71, 99, 50, 82, 56, 46 };
double m, p;
int n = sizeof(X)/sizeof(X[0]);
// forward declarations (the functions are defined below main)
double Loss_function(void);
void gradientStep(double alpha);
int main(void) {
double alpha = 0.00004; // 0.00007;
m = (Y[1] - Y[0]) / (X[1] - X[0]);
p = Y[0] - m * X[0];
for (int i = 1; i <= 8; i++) {
gradientStep(alpha);
}
return 0;
}
double Loss_function(void) {
double res = 0;
double tmp;
for (int i = 0; i < n; i++) {
tmp = Y[i] - m * X[i] - p;
res += tmp * tmp;
}
return res / 2.0 / (double)n;
}
void gradientStep(double alpha) {
double pg = 0, mg = 0;
for (int i = 0; i < n; i++) {
pg += Y[i] - m * X[i] - p;
mg += X[i] * (Y[i] - m * X[i] - p);
}
p += alpha * pg / n;
m += alpha * mg / n;
}
This code converges towards m = 2.79822, p = -382.666, and an error of 102.88. But if I use my calculator to find out the correct linear regression model, I find that the correct values of m and p should respectively be 1.601 and -191.1.
I also noticed that the algorithm won't converge for alpha > 0.00007, which seems quite low, and the value of p barely changes during the 8 iterations (or even after 2000 iterations).
What's wrong with my code?
Here's a good overview of the algorithm I'm trying to implement. The values of theta0 and theta1 are called p and m in my program.
Other implementation in python
More about the algorithm
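For reference, here is a standalone cross-check (an added sketch, not part of the original post): it computes the closed-form least-squares slope and intercept for the same data and reproduces the calculator values of roughly 1.601 and -191.1 that gradient descent should be converging to.
#include <cstddef>
#include <iostream>
int main()
{
    double X[] {163,169,158,158,161,172,156,161,154,145};
    double Y[] {52, 68, 49, 73, 71, 99, 50, 82, 56, 46};
    const std::size_t n = sizeof(X) / sizeof(X[0]);
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += X[i]; sy += Y[i]; sxx += X[i] * X[i]; sxy += X[i] * Y[i];
    }
    double m = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // closed-form slope
    double p = (sy - m * sx) / n;                          // closed-form intercept
    std::cout << "m = " << m << ", p = " << p << '\n';     // ~1.601 and ~-191.1
}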
This link gives a comprehensive view of the algorithm; it turns out I was following a completely wrong approach.
The following code does not work properly (and I have no plans to work on it further), but it should put anyone who's confronted with the same problem as me on the right track:
#include <vector>
#include <iostream>
typedef std::vector<double> vect;
std::vector<double> y, omega(2, 0), omega2(2, 0);
std::vector<std::vector<double>> X;
int n = 10;
// forward declarations (the functions are defined below main)
double f_function(const std::vector<double> &x);
void gradientStep(double alpha);
void display(void);
int main(void) {
/* Initialize X so that each member contains (1, x_i) */
/* Initialize y so that each member contains y_i */
double alpha = 0.00001;
display();
for (int i = 1; i <= 8; i++) {
gradientStep(alpha);
display();
}
return 0;
}
double f_function(const std::vector<double> &x) {
double c = 0; // must start at zero; it accumulates the dot product
for (unsigned int i = 0; i < omega.size(); i++) {
c += omega[i] * x[i];
}
return c;
}
void gradientStep(double alpha) {
for (int i = 0; i < n; i++) {
for (unsigned int j = 0; j < X[0].size(); j++) {
omega2[j] -= alpha/(double)n * (f_function(X[i]) - y[i]) * X[i][j];
}
}
omega = omega2;
}
void display(void) {
double res = 0, tmp = 0;
for (int i = 0; i < n; i++) {
tmp = y[i] - f_function(X[i]);
res += tmp * tmp; // Loss function
}
std::cout << "omega = ";
for (unsigned int i = 0; i < omega.size(); i++) {
std::cout << "[" << omega[i] << "] ";
}
std::cout << "\tError : " << res * .5/(double)n << std::endl;
}

Seeking knowledge on array of arrays memory performance

Context: Multichannel real time digital audio processing.
Access pattern: "Column-major", like so:
for (int sample = 0; sample < size; ++sample)
{
for (int channel = 0; channel < channels; ++channel) // channels = number of channel buffers
{
auto data = arr[channel][sample];
// do some computations
}
}
I'm seeking advice on how to make life easier for the CPU and memory in general. I realize interleaving the data would be better, but it's not possible.
My theory is that as long as you access memory sequentially for a while, the CPU will prefetch it - will this hold for N (channel) buffers? What about the size of the buffers, are there any "breaking points"?
Will it be very beneficial to have the channels in contiguous memory (increasing locality), or does that only hold for very small buffers (on the order of cache lines)? We could be talking buffer sizes > 100 kB apart.
I guess there is also a point where the cost of the computational part makes memory optimizations negligible?
Is this a case where manual prefetching makes sense? (A sketch of that follows below.)
I could test/profile my own system, but I only have that - 1 system. So any design choices I make may only positively affect that particular system. Any knowledge on these matters is appreciated: links, literature, platform-specific knowledge, etc.
Let me know if the question is too vague; I primarily thought it would be nice to have some wiki-ish experience/info on this area.
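On the manual-prefetching question above, here is a minimal sketch of what that could look like for the column-major loop (an illustration, not a recommendation: it relies on the GCC/Clang builtin __builtin_prefetch rather than standard C++, the prefetch distance is a made-up value, and it only helps if the hardware prefetchers are not already keeping up):
// Sketch: the column-major loop from above, with an explicit software prefetch a
// few samples ahead in each channel buffer. PREFETCH_DISTANCE is a tuning knob.
constexpr int PREFETCH_DISTANCE = 16;
void process(float** arr, int channels, int size)
{
    for (int sample = 0; sample < size; ++sample)
    {
        for (int channel = 0; channel < channels; ++channel)
        {
            if (sample + PREFETCH_DISTANCE < size)
                __builtin_prefetch(&arr[channel][sample + PREFETCH_DISTANCE], 0 /*read*/, 1 /*low temporal locality*/);
            float data = arr[channel][sample];
            (void)data; // stands in for the real per-sample computation
        }
    }
}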
Edit:
I created a program that tests the three cases I mentioned (distant, adjacent and contiguous, listed in supposedly increasing performance order) on small and big data sets. Maybe people will run it and report anomalies.
#include <iostream>
#include <chrono>
#include <algorithm>
const int b = 196000;
const int s = 64 / sizeof(float);
const int extra_it = 16;
float sbuf1[s];
float bbuf1[b];
int main()
{
float sbuf2[s];
float bbuf2[b];
float * sbuf3 = new float[s];
float * bbuf3 = new float[b];
float * sbuf4 = new float[s * 3];
float * bbuf4 = new float[b * 3];
float use = 0;
while (1)
{
using namespace std;
int c;
bool sorb;
cout << "small or big test (0/1)? ";
if (!(cin >> sorb))
return -1;
cout << endl << "test distant buffers (0), contiguous access (1) or adjacent access (2)? ";
if (!(cin >> c))
return -1;
auto t = std::chrono::high_resolution_clock::now();
if (c == 0)
{
// "worst case scenario", 3 distant buffers constantly touched
if (sorb)
{
for (int k = 0; k < b * extra_it; ++k)
for (int i = 0; i < s; ++i)
{
sbuf1[i] = k; // static memory
sbuf2[i] = k; // stack memory
sbuf3[i] = k; // heap memory
}
}
else
{
for (int k = 0; k < s * extra_it; ++k)
for (int i = 0; i < b; ++i)
{
bbuf1[i] = k; // static memory
bbuf2[i] = k; // stack memory
bbuf3[i] = k; // heap memory
}
}
}
else if (c == 1)
{
// "best case scenario", only contiguous memory touched, interleaved
if (sorb)
{
for (int k = 0; k < b * extra_it; ++k)
for (int i = 0; i < s * 3; i += 3)
{
sbuf4[i] = k;
sbuf4[i + 1] = k;
sbuf4[i + 2] = k;
}
}
else
{
for (int k = 0; k < s * extra_it; ++k)
for (int i = 0; i < b * 3; i += 3)
{
bbuf4[i] = k;
bbuf4[i + 1] = k;
bbuf4[i + 2] = k;
}
}
}
else if (c == 2)
{
// "compromise", adjecant memory buffers touched
if (sorb)
{
auto b1 = sbuf4;
auto b2 = sbuf4 + s;
auto b3 = sbuf4 + s * 2;
for (int k = 0; k < b * extra_it; ++k)
for (int i = 0; i < s; ++i)
{
b1[i] = k;
b2[i] = k;
b3[i] = k;
}
}
else
{
auto b1 = bbuf4;
auto b2 = bbuf4 + b;
auto b3 = bbuf4 + b * 2;
for (int k = 0; k < s * extra_it; ++k)
for (int i = 0; i < b; ++i)
{
b1[i] = k;
b2[i] = k;
b3[i] = k;
}
}
}
else
break;
cout << chrono::duration_cast<chrono::milliseconds>(chrono::high_resolution_clock::now() - t).count() << " ms" << endl;
// basically just touching the buffers, avoiding clever optimizations
use += std::accumulate(sbuf1, sbuf1 + s, 0);
use += std::accumulate(sbuf2, sbuf2 + s, 0);
use += std::accumulate(sbuf3, sbuf3 + s, 0);
use += std::accumulate(sbuf4, sbuf4 + s * 3, 0);
use -= std::accumulate(bbuf1, bbuf1 + b, 0);
use -= std::accumulate(bbuf2, bbuf2 + b, 0);
use -= std::accumulate(bbuf3, bbuf3 + b, 0);
use -= std::accumulate(bbuf4, bbuf4 + b * 3, 0);
}
std::cout << use;
std::cin.get();
}
On my Intel i7-3740QM, surprisingly, the distant buffers consistently outperform the more locality-friendly tests. It is close, however.

How to toggle a variable in a loop

Variable i toggles between 2 and 3 and is multiplied into a, as in the following example:
a=2;
a=a*i // a=2*2=4 i=2
a=a*i // a=4*3=12 i=3
a=a*i // a=12*2=24 i=2
a=a*i // a=24*3=72 i=3
which goes on as long as a is < 1000.
How can I give i the two values alternately, as shown in the example?
int a = 2, i = 2;
while( a < 1000 )
{
a *= i;
i = 5 - i;
}
Since 5 - 2 = 3 and 5 - 3 = 2, the assignment i = 5 - i flips i between the two values on each pass. There are many other ways as well.
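For example, another branchless variant in the same spirit (a small added illustration): since 2 and 3 differ only in their lowest bit, XORing with 1 also toggles between them.
int a = 2, i = 2;
while (a < 1000)
{
    a *= i;
    i ^= 1;   // 2 ^ 1 == 3, 3 ^ 1 == 2
}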
You should be able to use a loop
int a = 2;
bool flip = true;
while (a < 1000)
{
a *= flip ? 2 : 3;
flip = !flip;
}
You can't have i be equal to two values at the same time. You can, however, make i alternate between 2 and 3 while a < 1000. Below is the code:
int a = 2;
int counter = 0;
while (a < 1000) {
if (counter % 2 == 0) {
a *= 2;
}
else {
a *= 3;
}
counter++;
}
Here's a quick solution that doesn't involve a conditional.
int c = 0;
while (a < 1000)
a *= (c++ % 2) + 2;
or even
for(int c = 0; a < 1000; c++)
a *= (c % 2) + 2;
The modulo is computed, which results in either a 0 or a 1, and is then offset by 2, resulting in either 2 or 3.
Here's another efficient way to do this.
#include <iostream>
using namespace std;
int main() {
int its_bacon_time;
int i = ++(its_bacon_time = 0);
int y = 18;
int z = 9;
bool flag = !false;
int sizzle;
typedef bool decision_property;
#define perhaps (decision_property)(-42*42*-42)
#ifdef perhaps
# define YUM -
# define YUMMM return
#endif
bool bacon = !(bool) YUM(sizzle = 6);
if(flag)
std::cout << "YEP" << std::endl;
while (flag) {
if (bacon = !bacon)
flag = !flag; // YUM()?
if (YUM((YUM-i)YUM(i*2))+1>=((0x1337|0xECC8)&0x3E8))
(*((int*)&flag)) &= 0x8000;
else
flag = perhaps;
std::cout << i << " ";
int multiplicative_factor = y / (bacon ? z : y);
int* temporal_value_indicator = &i;
(**(&temporal_value_indicator)) *=
(((((multiplicative_factor & 0x0001) > 0) ? sizzle : bacon) // ~yum~
<< 1) ^ (bacon? 0 : 15));
std::cout << (((((multiplicative_factor & 0x0001) > 0) ? sizzle : bacon) // ~yum~
<< 1) ^ (bacon? 0 : 15)) << std::endl;
}
YUMMM its_bacon_time;
}
The point is that you should probably try something yourself first before asking for something that is really simple to achieve.
int main()
{
int a = 2;
int multiplier;
for (int i = 0; a < 1000; ++i)
{
multiplier = (i % 2) ? 3 : 2; // start with 2 on the first iteration (i == 0), as in the question
a *= multiplier;
}
}