Most efficient way to determine if a value is between two other values, inclusive - C++

Similar to Fastest way to determine if an integer is between two integers (inclusive) with known sets of values, I wish to figure out if some value (most likely a double-precision floating point number) is between two other values (of the same type). The caveat is that I don't already know which value is larger than the other, and I'm trying to determine whether/how I should avoid using std::max/min. Here is some code I've tried to test this with already:
inline bool inRangeMult(double p, double v1, double v2) {
    return (p - v1) * (p - v2) <= 0;
}

inline bool inRangeMinMax(double p, double v1, double v2) {
    return p <= std::max(v1, v2) && p >= std::min(v1, v2);
}

inline bool inRangeComp(double p, double v1, double v2) {
    return (p < v1) != (p < v2);
}
int main()
{
    double a = 1e4;
    std::clock_t start;
    double duration;
    bool res = false;

    start = std::clock();
    for (size_t i = 0; i < 2e4; ++i) {
        for (size_t j = 0; j < 2e4; ++j) {
            res = inRangeMult(a, i, j) ? res : !res;
        }
    }
    duration = std::clock() - start;
    std::cout << "InRangeMult: " << duration << std::endl;

    start = std::clock();
    for (size_t i = 0; i < 2e4; ++i) {
        for (size_t j = 0; j < 2e4; ++j) {
            res = inRangeMinMax(a, i, j) ? res : !res;
        }
    }
    duration = std::clock() - start;
    std::cout << "InRangeMinMax: " << duration << std::endl;

    start = std::clock();
    for (size_t i = 0; i < 2e4; ++i) {
        for (size_t j = 0; j < 2e4; ++j) {
            res = inRangeComp(a, i, j) ? res : !res;
        }
    }
    duration = std::clock() - start;
    std::cout << "InRangeComp: " << duration << std::endl;

    std::cout << "Tricking the compiler by printing inane res: " << res << std::endl;
}
On most runs I'm finding that using std::min/max is still fastest (the latest run prints 346, 310, and 324 respectively), but I'm not 100% confident this is the best test setup, or that I've exhausted all of the reasonable implementations.
I'd appreciate anyone's input with a better profiling setup and/or better implementation.
EDIT: Updated code to make it less prone to compiler optimization.
2nd EDIT: Tweaked value of a and number of iterations. Results for one run are:
inRangeMult: 1337
inRangeMinMax: 1127
inRangeComp: 729

The first test:
(p - v1) * (p - v2) <= 0
May result in overflow or underflow, due to the arithmetic operations.
The last one:
(p < v1) != (p < v2)
Doesn't provide the same results as the others, which are inclusive in respect of the boundaries v1 and v2. It's an admittedly small difference, considering the range and precision of the type double, but it may be significant.
Another option is to explicitly expand the logic of the second function:
p <= std::max(v1, v2) && p >= std::min(v1, v2) // v1 and v2 are compared twice
Into something like this:
bool inRangeComp(double p, double v1, double v2) {
    return v1 <= v2 // <- v1 and v2 are compared only once
        ? v1 <= p && p <= v2
        : v2 <= p && p <= v1;
}
At least one compiler (gcc 8.2) seems to prefer this version over the alternatives (thanks to Jarod42 for the linked snippet).

Related

Why is multi-threading of matrix calculation not faster than single-core?

This is my first time using multi-threading to speed up a heavy calculation.
Background: The idea is to calculate a Kernel Covariance matrix, by reading a list of 3D points x_test and calculating the corresponding matrix, which has dimensions x_test.size() x x_test.size().
I already sped up the calculations by only calculating the lower triangular matrix. Since all the calculations are independent of each other, I tried to speed up the process (x_test.size() = 27000 in my case) by splitting the calculations of the matrix entries row-wise, assigning a range of rows to each thread.
On a single core the calculations took about 280 seconds each time, on 4 cores it took 270-290 seconds.
main.cpp
int main(int argc, char *argv[]) {
    double sigma0sq = 1;
    double lengthScale [] = {0.7633, 0.6937, 3.3307e+07};
    const std::vector<std::vector<double>> x_test = parse2DCsvFile(inputPath);

    /* Finding data slices of similar size */
    // This piece of code works; each thread is assigned roughly the same number of matrix entries
    int numElements = x_test.size()*x_test.size()/2;
    const int numThreads = 4;
    int elemsPerThread = numElements / numThreads;
    std::vector<int> indices;
    int j = 0;
    for(std::size_t i=1; i<x_test.size()+1; ++i){
        int prod = i*(i+1)/2 - j*(j+1)/2;
        if (prod > elemsPerThread) {
            i--;
            j = i;
            indices.push_back(i);
            if(indices.size() == numThreads-1)
                break;
        }
    }
    indices.insert(indices.begin(), 0);
    indices.push_back(x_test.size());

    /* Spreading calculations to multiple threads */
    std::vector<std::thread> threads;
    for(std::size_t i = 1; i < indices.size(); ++i){
        threads.push_back(std::thread(calculateKMatrixCpp, x_test, lengthScale, sigma0sq, i, indices.at(i-1), indices.at(i)));
    }
    for(auto & th: threads){
        th.join();
    }
    return 0;
}
As you can see, each thread performs the following calculations on the data assigned to it:
void calculateKMatrixCpp(const std::vector<std::vector<double>> xtest, double lengthScale[], double sigma0sq, int threadCounter, int start, int stop){
    char buffer[8192];
    std::ofstream out("lower_half_matrix_" + std::to_string(threadCounter) + ".csv");
    out.rdbuf()->pubsetbuf(buffer, 8192); // size passed must not exceed the buffer's actual size
    for(int i = start; i < stop; ++i){
        for(int j = 0; j < i+1; ++j){
            double kij = seKernel(xtest.at(i), xtest.at(j), lengthScale, sigma0sq);
            if (j != 0)
                out << ',';
            out << kij;
        }
        if(i != xtest.size()-1)
            out << '\n';
    }
    out.close();
}
and
double seKernel(const std::vector<double> x1, const std::vector<double> x2, double lengthScale[], double sigma0sq) {
    double sum(0);
    for(std::size_t i=0; i<x1.size(); i++){
        sum += pow((x1.at(i)-x2.at(i))/lengthScale[i], 2);
    }
    return sigma0sq*exp(-0.5*sum);
}
Aspects I considered
Locking due to simultaneous access to the data vector -> I don't pass a reference to the threads, but a copy of the data. I know this is not optimal in terms of RAM usage, but as far as I know this should prevent simultaneous data access, since every thread has its own copy
Output -> every thread writes its part of the lower triangular matrix to its own file. My task manager doesn't indicate a full SSD utilization in the slightest
Compiler and machine
Windows 11
GNU GCC Compiler
Code::Blocks (although I don't think that should be of importance)
There are many details that can be improved in your code, but I think the two biggest issues are:
using vectors of vectors, which leads to fragmented data;
writing each piece of data to file as soon as its value is computed.
The first point is easy to fix: use something like std::vector<std::array<double, 3>>. In the code below I use an alias to make it more readable:
using Point3D = std::array<double, 3>;
std::vector<Point3D> x_test;
The second point is slightly harder to address. I assume you wanted to write to the disk inside each thread because you couldn't manage to write to a shared buffer that you could then write to a file.
Here is a way to do exactly that:
void calculateKMatrixCpp(
    std::vector<Point3D> const& xtest, Point3D const& lengthScale, double sigma0sq,
    int threadCounter, int start, int stop, std::vector<double>& kMatrix
) {
    // ...
    double& kij = kMatrix[i * xtest.size() + j];
    kij = seKernel(xtest[i], xtest[j], lengthScale, sigma0sq);
    // ...
}

// ...
threads.push_back(std::thread(
    calculateKMatrixCpp, x_test, lengthScale, sigma0sq,
    i, indices[i-1], indices[i], std::ref(kMatrix)
));
Here, kMatrix is the shared buffer and represents the whole matrix you are trying to compute. You need to pass it to the thread via std::ref. Each thread will write to a different location in that buffer, so there is no need for any mutex or other synchronization.
Once you make these changes and try to write kMatrix to the disk, you will realize that this is the part that takes the most time, by far.
Below is the full code I tried on my machine, and the computation time was about 2 seconds whereas the writing-to-file part took 300 seconds! No amount of multithreading can speed that up.
If you truly want to write all that data to the disk, you may have some luck with file mapping. Computing the exact size needed should be easy enough if all values have the same number of digits, and it looks like you could write the values with multithreading. I have never done anything like that, so I can't really say much more about it, but it looks to me like the fastest way to write multiple gigabytes of memory to the disk.
#include <vector>
#include <thread>
#include <iostream>
#include <string>
#include <cmath>
#include <array>
#include <random>
#include <fstream>
#include <chrono>
using Point3D = std::array<double, 3>;
auto generateSampleData() -> std::vector<Point3D> {
    static std::minstd_rand g(std::random_device{}());
    std::uniform_real_distribution<> d(-1.0, 1.0);
    std::vector<Point3D> data;
    data.reserve(27000);
    for (auto i = 0; i < 27000; ++i) {
        data.push_back({ d(g), d(g), d(g) });
    }
    return data;
}

double seKernel(Point3D const& x1, Point3D const& x2, Point3D const& lengthScale, double sigma0sq) {
    double sum = 0.0;
    for (auto i = 0u; i < 3u; ++i) {
        double distance = (x1[i] - x2[i]) / lengthScale[i];
        sum += distance*distance;
    }
    return sigma0sq * std::exp(-0.5*sum);
}

void calculateKMatrixCpp(std::vector<Point3D> const& xtest, Point3D const& lengthScale, double sigma0sq, int threadCounter, int start, int stop, std::vector<double>& kMatrix) {
    std::cout << "start of thread " << threadCounter << "\n" << std::flush;
    for(int i = start; i < stop; ++i) {
        for(int j = 0; j < i+1; ++j) {
            double& kij = kMatrix[i * xtest.size() + j];
            kij = seKernel(xtest[i], xtest[j], lengthScale, sigma0sq);
        }
    }
    std::cout << "end of thread " << threadCounter << "\n" << std::flush;
}
int main() {
    double sigma0sq = 1;
    Point3D lengthScale = {0.7633, 0.6937, 3.3307e+07};
    const std::vector<Point3D> x_test = generateSampleData();

    /* Finding data slices of similar size */
    // This piece of code works; each thread is assigned roughly the same number of matrix entries
    int numElements = x_test.size()*x_test.size()/2;
    const int numThreads = 4;
    int elemsPerThread = numElements / numThreads;
    std::vector<int> indices;
    int j = 0;
    for(std::size_t i = 1; i < x_test.size()+1; ++i){
        int prod = i*(i+1)/2 - j*(j+1)/2;
        if (prod > elemsPerThread) {
            i--;
            j = i;
            indices.push_back(i);
            if(indices.size() == numThreads-1)
                break;
        }
    }
    indices.insert(indices.begin(), 0);
    indices.push_back(x_test.size());

    auto start = std::chrono::system_clock::now();
    std::vector<double> kMatrix(x_test.size() * x_test.size(), 0.0);
    std::vector<std::thread> threads;
    for (std::size_t i = 1; i < indices.size(); ++i) {
        threads.push_back(std::thread(calculateKMatrixCpp, x_test, lengthScale, sigma0sq, i, indices[i - 1], indices[i], std::ref(kMatrix)));
    }
    for (auto& t : threads) {
        t.join();
    }
    auto end = std::chrono::system_clock::now();
    auto elapsed_seconds = std::chrono::duration<double>(end - start).count();
    std::cout << "computation time: " << elapsed_seconds << "s" << std::endl;

    start = std::chrono::system_clock::now();
    constexpr int buffer_size = 131072;
    char buffer[buffer_size];
    std::ofstream out("matrix.csv");
    out.rdbuf()->pubsetbuf(buffer, buffer_size);
    for (int i = 0; i < x_test.size(); ++i) {
        for (int j = 0; j < i + 1; ++j) {
            if (j != 0) {
                out << ',';
            }
            out << kMatrix[i * x_test.size() + j];
        }
        if (i != x_test.size() - 1) {
            out << '\n';
        }
    }
    end = std::chrono::system_clock::now();
    elapsed_seconds = std::chrono::duration<double>(end - start).count();
    std::cout << "writing time: " << elapsed_seconds << "s" << std::endl;
}
Okay, I've written an implementation with optimized formatting.
Using @Nelfeal's code, a run took around 250 seconds on my system, with the write time taking the most by far; or rather, with std::ofstream formatting taking most of the time.
I've written a C++20 version via std::format_to/std::format. It is a multi-threaded version that takes around 25-40 seconds to complete all the computations, formatting, and writing. If run in a single thread, it takes around 70 seconds on my system. The same performance should be achievable via the fmt library in C++11/14/17.
Here is the code:
import <vector>;
import <thread>;
import <iostream>;
import <string>;
import <cmath>;
import <array>;
import <random>;
import <fstream>;
import <chrono>;
import <format>;
import <filesystem>;
using Point3D = std::array<double, 3>;
auto generateSampleData(Point3D scale) -> std::vector<Point3D>
{
    static std::minstd_rand g(std::random_device{}());
    std::uniform_real_distribution<> d(-1.0, 1.0);
    std::vector<Point3D> data;
    data.reserve(27000);
    for (auto i = 0; i < 27000; ++i)
    {
        data.push_back({ d(g) * scale[0], d(g) * scale[1], d(g) * scale[2] });
    }
    return data;
}

double seKernel(Point3D const& x1, Point3D const& x2, Point3D const& lengthScale, double sigma0sq) {
    double sum = 0.0;
    for (auto i = 0u; i < 3u; ++i) {
        double distance = (x1[i] - x2[i]) / lengthScale[i];
        sum += distance * distance;
    }
    return sigma0sq * std::exp(-0.5 * sum);
}
void calculateKMatrixCpp(std::vector<Point3D> const& xtest, Point3D lengthScale, double sigma0sq, int threadCounter, int start, int stop, std::filesystem::path localPath)
{
    using namespace std::string_view_literals;
    std::vector<char> buffer;
    buffer.reserve(15'000);
    std::ofstream out(localPath);
    std::cout << std::format("starting thread {}: from {} to {}\n"sv, threadCounter, start, stop);
    for (int i = start; i < stop; ++i)
    {
        for (int j = 0; j < i; ++j)
        {
            double kij = seKernel(xtest[i], xtest[j], lengthScale, sigma0sq);
            std::format_to(std::back_inserter(buffer), "{:.6g}, "sv, kij);
        }
        double kii = seKernel(xtest[i], xtest[i], lengthScale, sigma0sq);
        std::format_to(std::back_inserter(buffer), "{:.6g}\n"sv, kii);
        out.write(buffer.data(), buffer.size());
        buffer.clear();
    }
}
int main() {
    double sigma0sq = 1;
    Point3D lengthScale = { 0.7633, 0.6937, 3.3307e+07 };
    const std::vector<Point3D> x_test = generateSampleData(lengthScale);

    /* Finding data slices of similar size */
    // This piece of code works; each thread is assigned roughly the same number of matrix entries
    int numElements = x_test.size() * (x_test.size()+1) / 2;
    const int numThreads = 3;
    int elemsPerThread = numElements / numThreads;
    std::vector<int> indices;
    int j = 0;
    for (std::size_t i = 1; i < x_test.size() + 1; ++i) {
        int prod = i * (i + 1) / 2 - j * (j + 1) / 2;
        if (prod > elemsPerThread) {
            i--;
            j = i;
            indices.push_back(i);
            if (indices.size() == numThreads - 1)
                break;
        }
    }
    indices.insert(indices.begin(), 0);
    indices.push_back(x_test.size());

    auto start = std::chrono::system_clock::now();
    std::vector<std::thread> threads;
    using namespace std::string_view_literals;
    for (std::size_t i = 1; i < indices.size(); ++i)
    {
        threads.push_back(std::thread(calculateKMatrixCpp, std::ref(x_test), lengthScale, sigma0sq, i, indices[i - 1], indices[i], std::format("./matrix_{}.csv"sv, i-1)));
    }
    for (auto& t : threads)
    {
        t.join();
    }
    auto end = std::chrono::system_clock::now();
    auto elapsed_seconds = std::chrono::duration<double>(end - start);
    std::cout << std::format("total elapsed time: {}"sv, elapsed_seconds);
    return 0;
}
Note: I used 6 digits of precision here as it is the default for std::ofstream. More digits means more writing time to disk and lower performance.

Fastest way to get square root in float value

I am trying to find the fastest way to compute the square root of a float number in C++. I am using this type of function in a huge particle-movement calculation, e.g. calculating the distance between two particles needs a square root. Any suggestion would be very helpful.
Here is what I have tried; my code is below:
#include <math.h>
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
#define CHECK_RANGE 100
inline float msqrt(float a)
{
    int i;
    for (i = 0; i * i <= a; i++);
    float lb = i - 1; // lower bound
    if (lb * lb == a)
        return lb;
    float ub = lb + 1; // upper bound
    float pub = ub;    // previous upper bound
    for (int j = 0; j <= 20; j++)
    {
        float ub2 = ub * ub;
        if (ub2 > a)
        {
            pub = ub;
            ub = (lb + ub) / 2; // mid value of lower and upper bound
        }
        else
        {
            lb = ub;
            ub = pub;
        }
    }
    return ub;
}
void check_msqrt()
{
    for (size_t i = 0; i < CHECK_RANGE; i++)
    {
        msqrt(i);
    }
}

void check_sqrt()
{
    for (size_t i = 0; i < CHECK_RANGE; i++)
    {
        sqrt(i);
    }
}

int main()
{
    auto start1 = high_resolution_clock::now();
    check_msqrt();
    auto stop1 = high_resolution_clock::now();
    auto duration1 = duration_cast<microseconds>(stop1 - start1);
    cout << "Time for check_msqrt = " << duration1.count() << " micro secs\n";

    auto start2 = high_resolution_clock::now();
    check_sqrt();
    auto stop2 = high_resolution_clock::now();
    auto duration2 = duration_cast<microseconds>(stop2 - start2);
    cout << "Time for check_sqrt = " << duration2.count() << " micro secs";

    //cout << msqrt(3);
    return 0;
}
The output of the above code shows that the implemented method is about four times slower than the sqrt from math.h.
I need something faster than the math.h version.
In short, I do not think it is possible to implement something generally faster than the standard library version of sqrt.
Performance is a very important parameter when implementing standard library functions and it is fair to assume that such a commonly used function as sqrt is optimized as much as possible.
Beating the standard library function would require a special case, such as:
Availability of a suitable assembler instruction - or other specialized hardware support - on the particular system for which the standard library has not been specialized.
Knowledge of the needed range or precision. The standard library function must handle all cases specified by the standard. If the application only needs a subset of that or maybe only requires an approximate result then perhaps an optimization is possible.
Making a mathematical reduction of the calculations or combining some calculation steps in a smart way, so that an efficient implementation can be made for that combination.
Here's another alternative to binary search. It may not be as fast as std::sqrt; I haven't tested it. But it will definitely be faster than your binary search.
#include <cmath> // frexp, ldexp, isnan

auto Sqrt(float x)
{
    using F = decltype(x);
    if (x == 0 || x == INFINITY || std::isnan(x))
        return x;
    if (x < 0)
        return F{NAN};
    int e;
    x = std::frexp(x, &e);
    if (e % 2 != 0)
    {
        ++e;
        x /= 2;
    }
    auto y = (F{-160}/567*x + F{2'848}/2'835)*x + F{155}/567;
    y = (y + x/y)/2;
    y = (y + x/y)/2;
    return std::ldexp(y, e/2);
}
After getting +/-0, NaN, infinity, and negatives out of the way, it works by decomposing the float into a mantissa in the range [1/4, 1) times 2^e, where e is an even integer. The answer is then sqrt(mantissa) * 2^(e/2).
Finding the sqrt of the mantissa can be guessed at with a least squares quadratic curve fit in the range [1/4, 1]. Then that good guess is refined by two iterations of Newton–Raphson. This will get you within 1 ulp of the correctly rounded result. A good std::sqrt will typically get that last bit correct.
I have also tried the algorithm mentioned in https://en.wikipedia.org/wiki/Fast_inverse_square_root, but it did not produce the desired result. Please check:
#include <math.h>
#include <iostream>
#include <chrono>
#include <bit>
#include <limits>
#include <cstdint>
using namespace std;
using namespace std::chrono;
#define CHECK_RANGE 10000
inline float msqrt(float a)
{
    int i;
    for (i = 0; i * i <= a; i++);
    float lb = i - 1; // lower bound
    if (lb * lb == a)
        return lb;
    float ub = lb + 1; // upper bound
    float pub = ub;    // previous upper bound
    for (int j = 0; j <= 20; j++)
    {
        float ub2 = ub * ub;
        if (ub2 > a)
        {
            pub = ub;
            ub = (lb + ub) / 2; // mid value of lower and upper bound
        }
        else
        {
            lb = ub;
            ub = pub;
        }
    }
    return ub;
}

/* mentioned here -> https://en.wikipedia.org/wiki/Fast_inverse_square_root */
inline float Q_sqrt(float number)
{
    // std::bit_cast (from the included <bit> header) avoids the undefined
    // behaviour of type-punning through a union in C++.
    uint32_t i = std::bit_cast<uint32_t>(number);
    i = 0x5f3759df - (i >> 1);
    float f = std::bit_cast<float>(i);
    f *= 1.5F - (number * 0.5F * f * f);
    return 1 / f;
}
void check_Qsqrt()
{
    for (size_t i = 0; i < CHECK_RANGE; i++)
    {
        Q_sqrt(i);
    }
}

void check_msqrt()
{
    for (size_t i = 0; i < CHECK_RANGE; i++)
    {
        msqrt(i);
    }
}

void check_sqrt()
{
    for (size_t i = 0; i < CHECK_RANGE; i++)
    {
        sqrt(i);
    }
}

int main()
{
    auto start1 = high_resolution_clock::now();
    check_msqrt();
    auto stop1 = high_resolution_clock::now();
    auto duration1 = duration_cast<microseconds>(stop1 - start1);
    cout << "Time for check_msqrt = " << duration1.count() << " micro secs\n";

    auto start2 = high_resolution_clock::now();
    check_sqrt();
    auto stop2 = high_resolution_clock::now();
    auto duration2 = duration_cast<microseconds>(stop2 - start2);
    cout << "Time for check_sqrt = " << duration2.count() << " micro secs\n";

    auto start3 = high_resolution_clock::now();
    check_Qsqrt();
    auto stop3 = high_resolution_clock::now();
    auto duration3 = duration_cast<microseconds>(stop3 - start3);
    cout << "Time for check_Qsqrt = " << duration3.count() << " micro secs\n";

    //cout << Q_sqrt(3);
    //cout << sqrt(3);
    //cout << msqrt(3);
    return 0;
}

Slow std::vector vs [] in C++ - Why?

I am a bit rusty with C++ - having used it 20 years ago. I am trying to understand why std::vector is so much slower than native arrays in the following code. Can anyone explain it to me? I would much prefer using the standard libraries but not at the cost of this performance penalty:
Vector:
const int grid_e_rows = 50;
const int grid_e_cols = 50;

int H(std::vector<std::vector<int>> &sigma) {
    int h = 0;
    for (int r = 0; r < grid_e_rows; ++r) {
        int r2 = (r + 1) % grid_e_rows;
        for (int c = 0; c < grid_e_cols; ++c) {
            int c2 = (c + 1) % grid_e_cols;
            h += 1 * sigma[r][c] * sigma[r][c2] + 1 * sigma[r][c] * sigma[r2][c];
        }
    }
    return -h;
}

int main() {
    auto start = std::chrono::steady_clock::now();
    std::vector<std::vector<int>> sigma_a(grid_e_rows, std::vector<int>(grid_e_cols));
    for (int i = 0; i < 600000; i++)
        H(sigma_a);
    auto end = std::chrono::steady_clock::now();
    std::cout << "Calculation completed in " << std::chrono::duration_cast<std::chrono::seconds>(end - start).count()
              << " seconds";
    return 0;
}
Output is:
Calculation completed in 23 seconds
Array:
const int grid_e_rows = 50;
const int grid_e_cols = 50;

typedef int (*Sigma)[grid_e_rows][grid_e_cols];

int H(Sigma sigma) {
    int h = 0;
    for (int r = 0; r < grid_e_rows; ++r) {
        int r2 = (r + 1) % grid_e_rows;
        for (int c = 0; c < grid_e_cols; ++c) {
            int c2 = (c + 1) % grid_e_cols;
            h += 1 * (*sigma)[r][c] * (*sigma)[r][c2] + 1 * (*sigma)[r][c] * (*sigma)[r2][c];
        }
    }
    return -h;
}

int main() {
    auto start = std::chrono::steady_clock::now();
    int sigma_a[grid_e_rows][grid_e_cols];
    for (int i = 0; i < 600000; i++)
        H(&sigma_a);
    auto end = std::chrono::steady_clock::now();
    std::cout << "Calculation completed in " << std::chrono::duration_cast<std::chrono::seconds>(end - start).count()
              << " seconds";
    return 0;
}
Output is:
Calculation completed in 6 seconds
Any help would be appreciated.
First, you're timing the initialization. For the array case, there is none (the array is completely uninitialized). In the vector case, the vector is initialized to zero and then copied into each row.
But the primary reason is cache locality. The array case is a single block of 50*50 integers which are all contiguous in memory, and they can trivially fit in the L1D cache. In the vector case, each row is allocated dynamically, which means their addresses are almost certainly not contiguous and are instead spread all over the program's address space. Accessing one does not pull the adjacent rows into the cache.
Also, because the rows are relatively small, cache space is wasted on adjacent unrelated data, meaning that even after you've touched everything to pull it into the cache, it may not all fit in L1 anymore. And lastly, the access pattern is a lot less linear, and it may be beyond the capability of the hardware prefetcher to predict.
You are not compiling with optimizations.
Compare the two (benchmark links in the original answer): with vector of vectors, and with array.
To give you a small taste of what the optimizer might be doing for you, consider the following modification to your H() function for the vector of vector case.
int H(std::vector<std::vector<int>> &arg) {
    int h = 0;
    auto sigma = arg.data();
    for (int r = 0; r < grid_e_rows; ++r) {
        int r2 = (r + 1) % grid_e_rows;
        auto sr = sigma[r].data();
        auto sr2 = sigma[r2].data();
        for (int c = 0; c < grid_e_cols; ++c) {
            int c2 = (c + 1) % grid_e_cols;
            h += 1 * sr[c] * sr[c2] + 1 * sr[c] * sr2[c];
        }
    }
    return -h;
}
You will find that without optimizations, this version will run closer to the performance of your array version.

Faster way to convert a vector of vectors to a single contiguous vector with opposite storage order

I have a std::vector<std::vector<double>> that I am trying to convert to a single contiguous vector as fast as possible. My vector has a shape of roughly 4000 x 50.
The problem is, sometimes I need my output vector in column-major contiguous order (just concatenating the interior vectors of my 2d input vector), and sometimes I need my output vector in row-major contiguous order, effectively requiring a transpose.
I have found that a naive for loop is quite fast for conversion to a column-major vector:
auto to_dense_column_major_naive(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    for (size_t i = 0; i < n_col; ++i)
        for (size_t j = 0; j < n_row; ++j)
            out_vec[i * n_row + j] = vec[i][j];

    return out_vec;
}
But obviously a similar approach is very slow for row-wise conversion, because of all of the cache misses. So for row-wise conversion, I thought a blocking strategy to promote cache locality might be my best bet:
auto to_dense_row_major_blocking(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    size_t block_side = 8;

    for (size_t l = 0; l < n_col; l += block_side) {
        for (size_t k = 0; k < n_row; k += block_side) {
            for (size_t j = l; j < l + block_side && j < n_col; ++j) {
                auto const &column = vec[j];
                for (size_t i = k; i < k + block_side && i < n_row; ++i)
                    out_vec[i * n_col + j] = column[i];
            }
        }
    }

    return out_vec;
}
This is considerably faster than a naive loop for row-major conversion, but still almost an order of magnitude slower than naive column-major looping on my input size.
My question is, is there a faster approach to converting a (column-major) vector of vectors of doubles to a single contiguous row-major vector? I am struggling to reason about what the speed limit of this code should be, and am thus questioning whether I'm missing something obvious. My assumption was that blocking would give me a much larger speedup than it actually appears to give.
The chart was generated using QuickBench (and somewhat verified with GBench locally on my machine) with this code: (Clang 7, C++20, -O3)
auto to_dense_column_major_naive(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    for (size_t i = 0; i < n_col; ++i)
        for (size_t j = 0; j < n_row; ++j)
            out_vec[i * n_row + j] = vec[i][j];

    return out_vec;
}

auto to_dense_row_major_naive(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    for (size_t i = 0; i < n_col; ++i)
        for (size_t j = 0; j < n_row; ++j)
            out_vec[j * n_col + i] = vec[i][j];

    return out_vec;
}

auto to_dense_row_major_blocking(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    size_t block_side = 8;

    for (size_t l = 0; l < n_col; l += block_side) {
        for (size_t k = 0; k < n_row; k += block_side) {
            for (size_t j = l; j < l + block_side && j < n_col; ++j) {
                auto const &column = vec[j];
                for (size_t i = k; i < k + block_side && i < n_row; ++i)
                    out_vec[i * n_col + j] = column[i];
            }
        }
    }

    return out_vec;
}

auto to_dense_column_major_blocking(std::vector<std::vector<double>> const & vec)
    -> std::vector<double>
{
    auto n_col = vec.size();
    auto n_row = vec[0].size();

    std::vector<double> out_vec(n_col * n_row);

    size_t block_side = 8;

    for (size_t l = 0; l < n_col; l += block_side) {
        for (size_t k = 0; k < n_row; k += block_side) {
            for (size_t j = l; j < l + block_side && j < n_col; ++j) {
                auto const &column = vec[j];
                for (size_t i = k; i < k + block_side && i < n_row; ++i)
                    out_vec[j * n_row + i] = column[i];
            }
        }
    }

    return out_vec;
}
auto make_vecvec() -> std::vector<std::vector<double>>
{
    std::vector<std::vector<double>> vecvec(50, std::vector<double>(4000));
    std::mt19937 mersenne {2019};
    std::uniform_real_distribution<double> dist(-1000, 1000);
    for (auto &vec : vecvec)
        for (auto &val : vec)
            val = dist(mersenne);
    return vecvec;
}
static void NaiveColumnMajor(benchmark::State& state) {
    // Code before the loop is not measured
    auto vecvec = make_vecvec();
    for (auto _ : state) {
        benchmark::DoNotOptimize(to_dense_column_major_naive(vecvec));
    }
}
BENCHMARK(NaiveColumnMajor);

static void NaiveRowMajor(benchmark::State& state) {
    // Code before the loop is not measured
    auto vecvec = make_vecvec();
    for (auto _ : state) {
        benchmark::DoNotOptimize(to_dense_row_major_naive(vecvec));
    }
}
BENCHMARK(NaiveRowMajor);

static void BlockingRowMajor(benchmark::State& state) {
    // Code before the loop is not measured
    auto vecvec = make_vecvec();
    for (auto _ : state) {
        benchmark::DoNotOptimize(to_dense_row_major_blocking(vecvec));
    }
}
BENCHMARK(BlockingRowMajor);

static void BlockingColumnMajor(benchmark::State& state) {
    // Code before the loop is not measured
    auto vecvec = make_vecvec();
    for (auto _ : state) {
        benchmark::DoNotOptimize(to_dense_column_major_blocking(vecvec));
    }
}
BENCHMARK(BlockingColumnMajor);
First of all, I cringe whenever something is qualified as "obviously". That word is often used to cover up a shortcoming in one's deductions.
But obviously a similar approach is very slow for row-wise conversion, because of all of the cache misses.
I'm not sure which is supposed to be obvious: that the row-wise conversion would be slow, or that it's slow because of cache misses. In either case, I find it not obvious. After all, there are two caching considerations here, aren't there? One for reading and one for writing? Let's look at the code from the reading perspective:
row_major_naive
for (size_t i = 0; i < n_col; ++i)
    for (size_t j = 0; j < n_row; ++j)
        out_vec[j * n_col + i] = vec[i][j];
Successive reads from vec are reads of contiguous memory: vec[i][0] followed by vec[i][1], etc. Very good for caching. So... cache misses? Slow? :) Maybe not so obvious.
Still, there is something to be gleaned from this. The claim is only wrong by claiming "obviously". There are non-locality issues, but they occur on the writing end. (Successive writes are offset by the space for 50 double values.) And empirical testing confirms the slowness. So maybe a solution is to flip on what is considered "obvious"?
row major flipped
for (size_t j = 0; j < n_row; ++j)
    for (size_t i = 0; i < n_col; ++i)
        out_vec[j * n_col + i] = vec[i][j];
All I did here was reverse the loops. Literally swap the order of those two lines of code then adjust the indentation. Now successive reads are potentially all over the place, as they read from different vectors. However, successive writes are now to contiguous blocks of memory. In one sense, we are in the same situation as before. But just like before, one should measure performance before assuming "fast" or "slow".
NaiveColumnMajor: 3.4 seconds
NaiveRowMajor: 7.7 seconds
FlippedRowMajor: 4.2 seconds
BlockingRowMajor: 4.4 seconds
BlockingColumnMajor: 3.9 seconds
Still slower than the naive column major conversion. However, this approach is not only faster than naive row major, but it's also faster than blocking row major. At least on my computer (using gcc -O3 and obviously :P iterating thousands of times). Mileage may vary. I don't know what the fancy profiling tools would say. The point is that sometimes simpler is better.
For funsies I did a test where the dimensions are swapped (changing from 50 vectors of 4000 elements to 4000 vectors of 50 elements). All methods got hurt this way, but "NaiveRowMajor" took the biggest hit. Worth noting is that "flipped row major" fell behind the blocking version. So, as one might expect, the best tool for the job depends on what exactly the job is.
NaiveColumnMajor: 3.7 seconds
NaiveRowMajor: 16 seconds
FlippedRowMajor: 5.6 seconds
BlockingRowMajor: 4.9 seconds
BlockingColumnMajor: 4.5 seconds
(By the way, I also tried the flipping trick on the blocking version. The change was small -- around 0.2 -- and opposite of flipping the naive version. That is, "flipped blocking" was slower than "blocking" for the question's 50-of-4000 vectors, but faster for my 4000-of-50 variant. Fine tuning might improve the results.)
Update: I did a little more testing with the flipping trick on the blocking version. This version has four loops, so "flipping" is not as straight-forward as when there are only two loops. It looks like swapping the order of the outer two loops is bad for performance, while swapping the inner two loops is good. (Initially, I had done both and gotten mixed results.) When I swapped just the inner loops, I measured 3.8 seconds (and 4.1 seconds in the 4000-of-50 scenario), making this the best row-major option in my tests.
row major hybrid
for (size_t l = 0; l < n_col; l += block_side)
for (size_t i = 0; i < n_row; ++i)
for (size_t j = l; j < l + block_side && j < n_col; ++j)
out_vec[i * n_col + j] = vec[j][i];
(After swapping the inner loops, I merged the middle loops.)
As for the theory behind this, I would guess that this amounts to trying to write one cache block at a time. Once a block is written, try to re-use vectors (the vec[j]) before they get ejected from the cache. After you exhaust those source vectors, move on to a new group of source vectors, again writing full blocks at a time.
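To make this concrete, here is the hybrid conversion packaged as a self-contained function. The block_side value of 8 doubles (one 64-byte cache line) is my assumption for illustration, not a measured optimum; the best width should come from benchmarking.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical block width: 8 doubles == one 64-byte cache line
// (an assumed value for illustration; tune by measuring).
constexpr std::size_t block_side = 8;

// Hybrid row-major conversion: vec holds n_col inner vectors of n_row
// elements each; the output is the n_row x n_col transpose, written one
// cache-block-wide stripe at a time while the source vectors are still hot.
std::vector<double> to_dense_row_major_hybrid(const std::vector<std::vector<double>>& vec)
{
    const std::size_t n_col = vec.size();
    const std::size_t n_row = vec[0].size();
    std::vector<double> out_vec(n_col * n_row);
    for (std::size_t l = 0; l < n_col; l += block_side)
        for (std::size_t i = 0; i < n_row; ++i)
            for (std::size_t j = l; j < l + block_side && j < n_col; ++j)
                out_vec[i * n_col + j] = vec[j][i];
    return out_vec;
}
```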
I have added parallel versions of the two naive functions:
#include <ppl.h>
auto ppl_to_dense_column_major_naive(std::vector<std::vector<double>> const & vec)
-> std::vector<double>
{
auto n_col = vec.size();
auto n_row = vec[0].size();
std::vector<double> out_vec(n_col * n_row);
size_t vecLen = out_vec.size();
concurrency::parallel_for(size_t(0), vecLen, [&](size_t i)
{
size_t row = i / n_row;
size_t column = i % n_row;
out_vec[i] = vec[row][column];
});
return out_vec;
}
auto ppl_to_dense_row_major_naive(std::vector<std::vector<double>> const & vec)
-> std::vector<double>
{
auto n_col = vec.size();
auto n_row = vec[0].size();
std::vector<double> out_vec(n_col * n_row);
size_t vecLen = out_vec.size();
concurrency::parallel_for(size_t(0), vecLen, [&](size_t i)
{
size_t column = i / n_col;
size_t row = i % n_col;
out_vec[i] = vec[row][column];
});
return out_vec;
}
and additional benchmark code for all of them:
template< class _Fn, class ... Args >
auto callFncWithPerformance( std::string strFnName, _Fn toCall, Args&& ...args )
{
auto start = std::chrono::high_resolution_clock::now();
auto toRet = toCall( std::forward<Args>(args)... );
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> diff = end - start;
std::cout << strFnName << ": " << diff.count() << " s" << std::endl;
return toRet;
}
template< class _Fn, class ... Args >
auto second_callFncWithPerformance(_Fn toCall, Args&& ...args)
{
std::string strFnName(typeid(toCall).name());
auto start = std::chrono::high_resolution_clock::now();
auto toRet = toCall(std::forward<Args>(args)...);
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<double> diff = end - start;
std::cout << strFnName << ": " << diff.count() << " s";
return toRet;
}
#define MAKEVEC( FN, ... ) callFncWithPerformance( std::string( #FN ) , FN , __VA_ARGS__ )
int main()
{
//prepare vector
auto vec = make_vecvec();
std::vector< double > vecs[]
{
std::vector<double>(MAKEVEC(to_dense_column_major_naive, vec)),
std::vector<double>(MAKEVEC(to_dense_row_major_naive, vec)),
std::vector<double>(MAKEVEC(ppl_to_dense_column_major_naive, vec)),
std::vector<double>(MAKEVEC(ppl_to_dense_row_major_naive, vec)),
std::vector<double>(MAKEVEC(to_dense_row_major_blocking, vec)),
std::vector<double>(MAKEVEC(to_dense_column_major_blocking, vec)),
};
//system("pause");
return 0;
}
and here are the results:
Debug x64
to_dense_column_major_naive: 0.166859 s
to_dense_row_major_naive: 0.192488 s
ppl_to_dense_column_major_naive: 0.0557423 s
ppl_to_dense_row_major_naive: 0.0514017 s
to_dense_column_major_blocking: 0.118465 s
to_dense_row_major_blocking: 0.117732 s
Debug x86
to_dense_column_major_naive: 0.15242 s
to_dense_row_major_naive: 0.158746 s
ppl_to_dense_column_major_naive: 0.0534966 s
ppl_to_dense_row_major_naive: 0.0484076 s
to_dense_column_major_blocking: 0.111217 s
to_dense_row_major_blocking: 0.107727 s
Release x64
to_dense_column_major_naive: 0.000874 s
to_dense_row_major_naive: 0.0011973 s
ppl_to_dense_column_major_naive: 0.0054639 s
ppl_to_dense_row_major_naive: 0.0012034 s
to_dense_column_major_blocking: 0.0008023 s
to_dense_row_major_blocking: 0.0010282 s
Release x86
to_dense_column_major_naive: 0.0007156 s
to_dense_row_major_naive: 0.0012538 s
ppl_to_dense_column_major_naive: 0.0053351 s
ppl_to_dense_row_major_naive: 0.0013022 s
to_dense_column_major_blocking: 0.0008761 s
to_dense_row_major_blocking: 0.0012404 s
You are quite right: this data set is too small, and the per-element work too light, for parallelization to pay off here (see the Release results above). Still, I'll leave these functions posted for anyone who wants to reference them.

How should I improve the performance of this C++ code?

The following code operates on two std::vectors v1 and v2, each containing multiple 128-element vectors. The loops over the outer vectors (indexed by i1 and i2) contain an inner loop designed to filter out combinations of i1 and i2 before further, more expensive processing is performed. Around 99.9% of the combinations are filtered out.
Unfortunately the filtering loop is a major bottleneck in my program - profiling shows that 26% of the entire run time is spent on the line if(a[k] + b[k] > LIMIT).
const vector<vector<uint16_t>> & v1 = ...
const vector<vector<uint16_t>> & v2 = ...
for(size_t i1 = 0; i1 < v1.size(); ++i1) { //v1.size() and v2.size() about 20000
for(size_t i2 = 0; i2 < v2.size(); ++i2) {
const vector<uint16_t> & a = v1[i1];
const vector<uint16_t> & b = v2[i2];
bool good = true;
for(std::size_t k = 0; k < 128; ++k) {
if(a[k] + b[k] > LIMIT) { //LIMIT is a const uint16_t: approx 16000
good = false;
break;
}
}
if(!good) continue;
// Further processing involving i1 and i2
}
}
I think the performance of this code could be improved by increasing memory locality, plus perhaps vectorizing. Any suggestions on how to do this, or on other improvements that could be made?
You could apply SIMD to the inner loop:
bool good = true;
for(std::size_t k = 0; k < 128; ++k) {
if(a[k] + b[k] > LIMIT) { //LIMIT is a const uint16_t: approx 16000
good = false;
break;
}
}
as follows:
#include <emmintrin.h> // SSE2 intrinsics
#include <limits.h> // SHRT_MIN
// ...
// some useful constants - declare these somewhere before the outermost loop
const __m128i vLIMIT = _mm_set1_epi16(LIMIT + SHRT_MIN); // signed version of LIMIT
const __m128i vOFFSET = _mm_set1_epi16(SHRT_MIN); // offset for uint16_t -> int16_t conversion
// ...
bool good = true;
for(std::size_t k = 0; k < 128; k += 8) {
__m128i v, va, vb; // iterate through a, b, 8 elements at a time
int mask;
va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(&a[k])); // get 8 elements from a[k]
vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(&b[k])); // and 8 elements from b[k]
v = _mm_add_epi16(va, vb); // add a and b vectors (16-bit wrapping add)
v = _mm_add_epi16(v, vOFFSET); // add -32768 to make signed
v = _mm_cmpgt_epi16(v, vLIMIT); // compare against LIMIT
mask = _mm_movemask_epi8(v); // get comparison results as 16 bit mask
if (mask != 0) { // if any value exceeded limit
good = false; // clear good flag and exit loop
break;
}
}
Warning: untested code - may need debugging, but the general approach should be sound.
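Since the code above is untested, it may help to check the offset trick in scalar form first. The sketch below models one SIMD lane (exceedsLimitOffset is a hypothetical helper name, not from the original code): a 16-bit wrapping add, a bias by SHRT_MIN, then a signed 16-bit compare, which is all _mm_cmpgt_epi16 provides.

```cpp
#include <cstdint>
#include <climits>

// Scalar model of the unsigned->signed offset trick. Caveat (worth knowing):
// because the 16-bit add wraps, sums above 65535 can be misreported; the
// saturating-add variant later in this thread avoids that.
inline bool exceedsLimitOffset(uint16_t a, uint16_t b, uint16_t limit)
{
    const int16_t sum  = static_cast<int16_t>(static_cast<uint16_t>(a + b) + SHRT_MIN);
    const int16_t vlim = static_cast<int16_t>(limit + SHRT_MIN);
    return sum > vlim; // equals (a + b > limit) whenever a + b <= 65535
}
```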
You've got the most efficient access pattern for v1, but you are sequentially scanning through all of v2 for each iteration of the outer loop. This is very inefficient, because v2 access will continually cause (L2 and probably also L3) cache misses.
A better access pattern is to increase the loop nesting, so that outer loops stride through v1 and v2, and inner loops process elements within a subsegment of both v1 and v2 that's small enough to fit in cache.
Basically, instead of
for(size_t i1 = 0; i1 < v1.size(); ++i1) { //v1.size() and v2.size() about 20000
for(size_t i2 = 0; i2 < v2.size(); ++i2) {
Do
for(size_t i2a = 0; i2a < v2.size(); i2a += 32) {
for(size_t i1 = 0; i1 < v1.size(); ++i1) {
for(size_t i2 = i2a; i2 < v2.size() && i2 < i2a + 32; ++i2) {
Or
size_t i2a = 0;
// handle complete blocks
for(; i2a < v2.size() - 31; i2a += 32) {
for(size_t i1 = 0; i1 < v1.size(); ++i1) {
for(size_t i2 = i2a; i2 < i2a + 32; ++i2) {
}
}
}
// handle leftover partial block
for(size_t i1 = 0; i1 < v1.size(); ++i1) {
for(size_t i2 = i2a; i2 < v2.size(); ++i2) {
}
}
This way, a chunk of 32 * 128 * sizeof (uint16_t) bytes, or 8kB, will be loaded from v2 into cache, and then reused 20,000 times.
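One property worth convincing yourself of is that the restructured loop nest still visits every (i1, i2) pair exactly once, including the partial block at the end. A small counting sketch (the sizes and block width are hypothetical stand-ins for the real ~20000 and 32) makes that easy to verify:

```cpp
#include <vector>
#include <cstddef>

// Count how many times each (i1, i2) pair is visited by the blocked nest.
// Every entry should end up exactly 1, matching the original double loop.
std::vector<int> visitCounts(std::size_t n1, std::size_t n2, std::size_t block)
{
    std::vector<int> counts(n1 * n2, 0);
    for (std::size_t i2a = 0; i2a < n2; i2a += block)                 // stride over v2
        for (std::size_t i1 = 0; i1 < n1; ++i1)                       // full pass over v1
            for (std::size_t i2 = i2a; i2 < n2 && i2 < i2a + block; ++i2)
                ++counts[i1 * n2 + i2];                               // record the visit
    return counts;
}
```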
This improvement is orthogonal to SIMD (SSE) vectorization. It will interact with thread-based parallelism, but probably in a good way.
First, one simple optimization is to make the inner loop branchless, though the compiler may well do this on its own, so I'm not sure how much it improves things:
for(std::size_t k = 0; k < 128 && good; ++k)
{
good = a[k] + b[k] <= LIMIT;
}
Second, I think it may be better to store the filter results in a second vector first, because any further processing involving i1 and i2 could evict the filter's working set from the CPU cache.
Third, and this could be a major optimization: if both outer loops actually iterate over the same data, you can rewrite the second loop as
for(size_t i2 = i1; i2 < v2.size(); ++i2)
because the + operation on the a and b elements is commutative, so the pair (i1, i2) yields the same filter result as (i2, i1). Note that v1 and v2 merely being the same size is not enough; they must hold the same data, otherwise the iteration must stay as it is.
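To see the size of the saving, a counting sketch (pairCountFull and pairCountHalf are hypothetical helpers): the triangular loop performs n*(n+1)/2 iterations instead of n*n, roughly halving the filter work when both loops cover the same data.

```cpp
#include <cstddef>

// Iterations of the original nest: every ordered pair, n * n in total.
std::size_t pairCountFull(std::size_t n)
{
    std::size_t c = 0;
    for (std::size_t i1 = 0; i1 < n; ++i1)
        for (std::size_t i2 = 0; i2 < n; ++i2)
            ++c;
    return c;
}

// Iterations with i2 starting at i1: each unordered pair once, n*(n+1)/2.
std::size_t pairCountHalf(std::size_t n)
{
    std::size_t c = 0;
    for (std::size_t i1 = 0; i1 < n; ++i1)
        for (std::size_t i2 = i1; i2 < n; ++i2)
            ++c;
    return c;
}
```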
Fourth, as far as I can see you are processing two matrices, so it would be better to store each as one flat vector of elements rather than a vector of vectors.
Hope this helps.
Razvan.
A few suggestions:
As suggested in the comments, replace the inner 128 element vector with an array for better memory locality.
This code looks highly parallelizable, have you tried that? You could split the combinations for filtering across all available cores and then rebalance the collected work and split the processing across all cores.
I implemented a version using arrays for the inner 128 elements, PPL for parallelization (requires VS 2012 or higher) and a bit of SSE code for the filtering and got a pretty significant speedup. Depending on what exactly the 'further processing' involves there may be benefits to structuring things slightly differently (in this example I don't rebalance the work after filtering for example).
Update: I implemented the cache blocking suggested by Ben Voigt and got a bit more of a speed up.
#include <vector>
#include <array>
#include <random>
#include <limits>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <chrono>
#include <iterator>
#include <ppl.h>
#include <immintrin.h>
using namespace std;
using namespace concurrency;
namespace {
const int outerVecSize = 20000;
const int innerVecSize = 128;
const int LIMIT = 16000;
auto engine = default_random_engine();
};
typedef vector<uint16_t> InnerVec;
typedef array<uint16_t, innerVecSize> InnerArr;
template <typename Cont> void randomFill(Cont& c) {
// We want approx 0.1% to pass filter, the mean and standard deviation are chosen to get close to that
static auto dist = normal_distribution<>(LIMIT / 4.0, LIMIT / 4.6);
generate(begin(c), end(c), [] {
auto clamp = [](double x, double minimum, double maximum) { return min(max(minimum, x), maximum); };
return static_cast<uint16_t>(clamp(dist(engine), 0.0, numeric_limits<uint16_t>::max()));
});
}
void resizeInner(InnerVec& v) { v.resize(innerVecSize); }
void resizeInner(InnerArr& a) {}
template <typename Inner> Inner generateRandomInner() {
auto inner = Inner();
resizeInner(inner);
randomFill(inner);
return inner;
}
template <typename Inner> vector<Inner> generateRandomInput() {
auto outer = vector<Inner>(outerVecSize);
generate(begin(outer), end(outer), generateRandomInner<Inner>);
return outer;
}
void Report(const chrono::high_resolution_clock::duration elapsed, size_t in1Size, size_t in2Size,
const int passedFilter, const uint32_t specialValue) {
cout << passedFilter << "/" << in1Size* in2Size << " ("
<< 100.0 * (double(passedFilter) / double(in1Size * in2Size)) << "%) passed filter\n";
cout << specialValue << "\n";
cout << "Elapsed time = " << chrono::duration_cast<chrono::milliseconds>(elapsed).count() << "ms" << endl;
}
void TestOriginalVersion() {
cout << __FUNCTION__ << endl;
engine.seed();
const auto v1 = generateRandomInput<InnerVec>();
const auto v2 = generateRandomInput<InnerVec>();
int passedFilter = 0;
uint32_t specialValue = 0;
auto startTime = chrono::high_resolution_clock::now();
for (size_t i1 = 0; i1 < v1.size(); ++i1) { // v1.size() and v2.size() about 20000
for (size_t i2 = 0; i2 < v2.size(); ++i2) {
const vector<uint16_t>& a = v1[i1];
const vector<uint16_t>& b = v2[i2];
bool good = true;
for (std::size_t k = 0; k < 128; ++k) {
if (static_cast<int>(a[k]) + static_cast<int>(b[k])
> LIMIT) { // LIMIT is a const uint16_t: approx 16000
good = false;
break;
}
}
if (!good) continue;
// Further processing involving i1 and i2
++passedFilter;
specialValue += inner_product(begin(a), end(a), begin(b), 0);
}
}
auto endTime = chrono::high_resolution_clock::now();
Report(endTime - startTime, v1.size(), v2.size(), passedFilter, specialValue);
}
bool needsProcessing(const InnerArr& a, const InnerArr& b) {
static_assert(sizeof(a) == sizeof(b) && (sizeof(a) % 16) == 0, "Array size must be multiple of 16 bytes.");
static const __m128i mmLimit = _mm_set1_epi16(LIMIT);
static const __m128i mmLimitPlus1 = _mm_set1_epi16(LIMIT + 1);
static const __m128i mmOnes = _mm_set1_epi16(-1);
auto to_m128i = [](const uint16_t* p) { return reinterpret_cast<const __m128i*>(p); };
return equal(to_m128i(a.data()), to_m128i(a.data() + a.size()), to_m128i(b.data()), [&](const __m128i& a, const __m128i& b) {
// avoid overflow due to signed compare by clamping sum to LIMIT + 1
const __m128i clampSum = _mm_min_epu16(_mm_adds_epu16(a, b), mmLimitPlus1);
return _mm_test_all_zeros(_mm_cmpgt_epi16(clampSum, mmLimit), mmOnes);
});
}
void TestArrayParallelVersion() {
cout << __FUNCTION__ << endl;
engine.seed();
const auto v1 = generateRandomInput<InnerArr>();
const auto v2 = generateRandomInput<InnerArr>();
combinable<int> passedFilterCombinable;
combinable<uint32_t> specialValueCombinable;
auto startTime = chrono::high_resolution_clock::now();
const size_t blockSize = 64;
parallel_for(size_t(0), v1.size(), blockSize, [&](size_t i) {
for (const auto& b : v2) {
const auto blockBegin = begin(v1) + i;
const auto blockEnd = begin(v1) + min(v1.size(), i + blockSize);
for (auto it = blockBegin; it != blockEnd; ++it) {
const InnerArr& a = *it;
if (!needsProcessing(a, b))
continue;
// Further processing involving a and b
++passedFilterCombinable.local();
specialValueCombinable.local() += inner_product(begin(a), end(a), begin(b), 0);
}
}
});
auto passedFilter = passedFilterCombinable.combine(plus<int>());
auto specialValue = specialValueCombinable.combine(plus<uint32_t>());
auto endTime = chrono::high_resolution_clock::now();
Report(endTime - startTime, v1.size(), v2.size(), passedFilter, specialValue);
}
int main() {
TestOriginalVersion();
TestArrayParallelVersion();
}
On my 8 core system I see a pretty good speedup, your results will vary depending on how many cores you have etc.
TestOriginalVersion
441579/400000000 (0.110395%) passed filter
2447300015
Elapsed time = 12525ms
TestArrayParallelVersion
441579/400000000 (0.110395%) passed filter
2447300015
Elapsed time = 657ms