I'm trying to replicate the effects of false sharing using OpenMP as explained in the OpenMP introduction by Tim Mattson.
My program performs a straightforward numerical integration (see the link for the mathematical details), and I've implemented two versions. The first is meant to be cache-friendly: each thread keeps a local variable in which it accumulates its portion of the index space:
const auto num_slices = 100000000;
const auto num_threads = 4; // Swept from 1 to 9 threads
const auto slice_thickness = 1.0 / num_slices;
const auto slices_per_thread = num_slices / num_threads;
std::vector<double> partial_sums(num_threads);
#pragma omp parallel num_threads(num_threads)
{
    double local_buffer = 0;
    const auto thread_num = omp_get_thread_num();
    for (auto slice = slices_per_thread * thread_num; slice < slices_per_thread * (thread_num + 1); ++slice)
        local_buffer += func(slice * slice_thickness); // <-- Updates thread-exclusive buffer
    partial_sums[thread_num] = local_buffer;
}
// Sum up partial_sums to receive final result
// ...
The second version has each thread update an element of a shared std::vector<double>, so that every write invalidates the cache line on all the other threads:
// ... as above
#pragma omp parallel num_threads(num_threads)
{
    const auto thread_num = omp_get_thread_num();
    for (auto slice = slices_per_thread * thread_num; slice < slices_per_thread * (thread_num + 1); ++slice)
        partial_sums[thread_num] += func(slice * slice_thickness); // <-- Invalidates caches
}
// Sum up partial_sums to receive final result
// ...
The problem is that I am unable to see any effects of false sharing whatsoever unless I turn off optimization.
Compiling my code (which has to account for a few more details than the snippets above) with GCC 8.1 and no optimization (-O0) yields the results I naively expected. Using full optimization (-O3), however, eliminates any performance difference between the two versions, as shown in the plot.
What's the explanation for this? Does the compiler actually eliminate false sharing? If not, how come the effect is so small when running the optimized code?
I'm on a Core-i7 machine using Fedora. The plot displays mean values whose sample standard deviations don't add any information to this question.
tl;dr: The compiler optimizes your second version into the first.
Consider the code within the loop of your second implementation - ignoring the OMP/multithreaded aspect of it for a moment.
You have increments of a value within an std::vector - which is necessarily located on the heap (well, at least up to and including C++17). The compiler sees you're adding to a value on the heap within a loop; this is a typical candidate for optimization: it takes the heap access out of the loop and uses a register as a buffer. It doesn't even need to read from the heap, since they're just additions - so it essentially arrives at your first solution.
See this happening on GodBolt (with a simplified example) - notice how the code for bar1() and bar2() is almost the same, with accumulation happening in registers.
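The linked example is not reproduced here, but a sketch of that kind of source-level comparison could look like the following (bar1, bar2 and f below are illustrative stand-ins, not the actual code behind the link). With -O3 the compiler typically keeps the running sum in a register in both cases and writes it back once after the loop, assuming it can see that f does not touch the array:

#include <cstddef>

double result[8];                           // stands in for the shared partial_sums buffer

static double f(double x) { return 4.0 / (1.0 + x * x); }

double bar1(std::size_t n, std::size_t t)   // "local buffer" variant
{
    double local = 0;
    for (std::size_t i = 0; i < n; ++i)
        local += f(i * 1e-8);
    result[t] = local;                      // single store after the loop
    return result[t];
}

double bar2(std::size_t n, std::size_t t)   // "shared element" variant
{
    for (std::size_t i = 0; i < n; ++i)
        result[t] += f(i * 1e-8);           // the optimizer hoists this into a register
    return result[t];
}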
Now, the fact that there's multi-threading and OMP involved doesn't change the above. If you were to use, say, std::atomic<double> instead of double, then it might have changed (and maybe not even then, if the compiler is smart enough).
Notes:
Thanks go to @Evg for noticing a glaring mistake in the code of a previous version of this answer.
The compiler must be able to know that func() won't also change the value of your vector - or to decide that, for the purposes of addition, it shouldn't really matter.
This optimization could be seen as a strength reduction - from an operation on the heap to one on a register - but I'm not sure that term is in use for this case.
Related
I am trying to implement a parallel foreach loop for std::vector which runs the computations in the optimal number of threads (number of cores minus one for the main thread); however, my implementation seems not to be fast enough – it actually runs 6 times slower than the single-threaded one!
Thread instantiation is often blamed for being a bottleneck, so I tried a larger vector; however, that did not seem to help.
I am currently stuck watching the parallel algorithm execute in 13000-20000 microseconds in separate threads while the single-threaded one executes in 120-200 microseconds in the main thread, and I cannot figure out what I am doing wrong. Of those 13-20 ms, 8 or 9 are usually spent creating the threads, but I still see no reason why a std::for_each running through 1/3 of the vector in a separate thread should take several times longer than another std::for_each needs to iterate through the whole vector.
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <functional>
#include <iostream>
#include <thread>
#include <vector>
const unsigned int numCores = std::thread::hardware_concurrency();
const size_t numUse = numCores - 1;
struct foreach
{
    inline static void go(std::function<void(uint32_t&)>&& func, std::vector<uint32_t>& cont)
    {
        std::vector<std::thread> vec;
        vec.reserve(numUse);
        std::vector<std::vector<uint32_t>::iterator> arr(numUse + 1);
        size_t distance = cont.size() / numUse;
        for (size_t i = 0; i < numUse; i++)
            arr[i] = cont.begin() + i * distance;
        arr[numUse] = cont.end();
        for (size_t i = 0; i < numUse - 1; i++)
        {
            vec.emplace_back([&] { std::for_each(cont.begin() + i * distance, cont.begin() + (i + 1) * distance, func); });
        }
        vec.emplace_back([&] { std::for_each(cont.begin() + (numUse - 1) * distance, cont.end(), func); });
        for (auto &d : vec)
        {
            d.join();
        }
    }
};
int main()
{
    std::chrono::steady_clock clock;
    std::vector<uint32_t> numbers;
    for (size_t i = 0; i < 50000000; i++)
        numbers.push_back(i);

    std::chrono::steady_clock::time_point t0m = clock.now();
    std::for_each(numbers.begin(), numbers.end(), [](uint32_t& value) { ++value; });
    std::chrono::steady_clock::time_point t1m = clock.now();
    std::cout << "Single-threaded run executes in " << std::chrono::duration_cast<std::chrono::microseconds>(t1m - t0m).count() << "mcs\n";

    std::chrono::steady_clock::time_point t0s = clock.now();
    foreach::go([](uint32_t& i) { ++i; }, numbers);
    std::chrono::steady_clock::time_point t1s = clock.now();
    std::cout << "Multi-threaded run executes in " << std::chrono::duration_cast<std::chrono::microseconds>(t1s - t0s).count() << "mcs\n";

    getchar();
}
Is there a way I can optimize this and increase the performance?
The compiler I am using is the Visual Studio 2017 one. The configuration is Release x86. I have also been advised to use a profiler and am currently figuring out how to use one.
I actually managed to get the parallel code to run faster than the regular one, but this required a vector of tens of thousands of vectors of five elements each. If anyone has advice on how to improve performance, or on where I can find a better implementation to study its structure, that would be appreciated.
Thank you for providing some example code.
Getting good metrics (especially on parallel code) can be pretty tricky. Your metrics are tainted.
Use high_resolution_clock instead of steady_clock for profiling.
Don't include the thread startup time in your timing measurement. Thread launch/join is orders of magnitude longer than your actual work here. You should create the threads once and use condition variables to make them sleep until you signal them to work. This is not trivial, but it is essential that you don't measure the thread startup time.
Visual Studio has a profiler. You need to compile your code with release optimizations but also include the debug symbols (those are excluded in the default release configuration). I haven't looked into how to set this up manually because I usually use CMake and it sets up a RelWithDebInfo configuration automatically.
Another issue kind of related to having good metrics is that your "work" is just incrementing an integer. Is that really representative of the work your program is going to be doing? Increment is really fast. If you look at the assembly generated by your sequential version, everything gets inlined into a really short loop.
Lambdas have a very good chance of being inlined. But in your go function, you're casting the lambda to std::function. std::function has a very poor chance of being inlined.
So if you want to keep the chance of getting the lambda inlined, you have to do some template tricks:
template <typename FUNC>
inline static void go(FUNC&& func, std::vector<uint32_t>& cont)
By manually inlining your code (I moved the contents of the go function to main) and doing step 2 above, I was able to get the parallel version (4 threads on a hyperthreaded dual-core) to run in about 75% of the time of the sequential one. That's not particularly good scaling, but it's not bad considering that the original was already pretty fast. For further optimization, I would use SIMD, a.k.a. "vector" operations (unrelated to std::vector, except in the sense that both deal with arrays), which apply the increment to multiple array elements in one iteration.
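For illustration, a sketch of such a SIMD increment with SSE2 intrinsics might look like this (not from the original answer; it assumes the element count is a multiple of 4, otherwise a scalar tail loop is needed):

#include <cstddef>
#include <cstdint>
#include <emmintrin.h> // SSE2
#include <vector>

void increment_simd(std::vector<uint32_t>& numbers)
{
    const __m128i ones = _mm_set1_epi32(1);
    // Increment four 32-bit elements per iteration.
    for (std::size_t i = 0; i < numbers.size(); i += 4)
    {
        __m128i v = _mm_loadu_si128(reinterpret_cast<__m128i*>(&numbers[i]));
        v = _mm_add_epi32(v, ones);
        _mm_storeu_si128(reinterpret_cast<__m128i*>(&numbers[i]), v);
    }
}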
You have a race condition here:
for (size_t i = 0; i < numUse - 1; i++)
{
    vec.emplace_back([&] { std::for_each(cont.begin() + i * distance, cont.begin() + (i + 1) * distance, func); });
}
Because you set the default lambda capture to capture-by-reference, i is captured by reference, and that could cause some threads to check the wrong range or too long a range. You could do this: [&, i], but why risk shooting yourself in the foot again? Scott Meyers recommends against using default capture modes. Just do [&cont, &distance, &func, i].
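With that change, the corrected loop would look like this:

for (size_t i = 0; i < numUse - 1; i++)
{
    // i is captured by value, so each thread gets its own copy of the index
    vec.emplace_back([&cont, &distance, &func, i] {
        std::for_each(cont.begin() + i * distance,
                      cont.begin() + (i + 1) * distance, func);
    });
}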
UPDATE:
I think it's a fine idea to move your foreach to its own space. I think what you should do is separate the thread creation from task dispatch. That means you need some kind of signaling system (generally condition variables). You could look into thread pools.
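As a rough illustration of that idea (not a drop-in replacement for the code above), a minimal pool of persistent workers driven by condition variables could be sketched like this; names and structure are illustrative only, and a real implementation needs more care (exception safety, task batching, etc.):

#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

class SimplePool
{
public:
    explicit SimplePool(std::size_t n) : stop_(false), pending_(0)
    {
        for (std::size_t i = 0; i < n; i++)
            workers_.emplace_back([this] { workerLoop(); });
    }

    ~SimplePool()
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        wake_.notify_all();
        for (auto& t : workers_)
            t.join();
    }

    // Enqueue one task; the workers already exist, so no thread is created here.
    void post(std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push_back(std::move(task));
            ++pending_;
        }
        wake_.notify_one();
    }

    // Block until every posted task has finished.
    void wait()
    {
        std::unique_lock<std::mutex> lock(m_);
        done_.wait(lock, [this] { return pending_ == 0; });
    }

private:
    void workerLoop()
    {
        for (;;)
        {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                wake_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty())
                    return;
                task = std::move(tasks_.back());
                tasks_.pop_back();
            }
            task();
            {
                std::lock_guard<std::mutex> lock(m_);
                if (--pending_ == 0)
                    done_.notify_all();
            }
        }
    }

    std::vector<std::thread> workers_;
    std::vector<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable wake_, done_;
    bool stop_;
    std::size_t pending_;
};

Timing only the post()/wait() calls then excludes thread startup. Each task is a std::function here, which is fine because it is invoked only once per chunk, in line with the point about std::function made above.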
An easy way to add threadpools is to use OpenMP, which Visual Studio 2017 has support for (OpenMP 2.0). A caveat is that there's no guarantee that the threads won't be created/destroyed during entry/exit of the parallel section (it's implementation dependent). So it trades off performance with ease of use.
If you can use C++17, it has a standard parallel for_each (the ExecutionPolicy overload). Most of the standard algorithms do. https://en.cppreference.com/w/cpp/algorithm/for_each
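For example, the increment over numbers could then be written as follows (a sketch; it assumes a standard library that actually implements the parallel execution policies):

#include <algorithm>
#include <cstdint>
#include <execution>
#include <vector>

void increment_all(std::vector<uint32_t>& numbers)
{
    // The library decides how to split the range across worker threads.
    std::for_each(std::execution::par, numbers.begin(), numbers.end(),
                  [](uint32_t& value) { ++value; });
}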
As for std::function: you can use it; you just don't want your basic operation (the one that will be called 50,000,000 times) to be a std::function.
Bad:
void go(std::function<...>& func)
{
std::thread t([&] { std::for_each(v.begin(), v.end(), func); });
...
}
...
go([](int& i) { ++i; });
Good:
void go(std::function<...>& func)
{
std::thread t(func);
...
}
...
go([&v]() { std::for_each(v.begin(), v.end(), [](int& i) { ++i; }); });
In the good version, the short inner lambda (i.e. ++i) gets inlined in the call to for_each. That's important because it gets called 50 million times. The call to the bigger lambda is not inlined (because it's converted to std::function) but that's ok because it only gets called once per thread.
I am new to TBB and am trying to do a simple experiment.
My data for functions are:
int n = 9000000;
int *data = new int[n];
I created a first function that does not use TBB:
void _array(int* &data, int n) {
for (int i = 0; i < n; i++) {
data[i] = busyfunc(data[i])*123;
}
}
It takes 0.456635 seconds.
And I also created a second function, this one using TBB:
#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
using namespace tbb;

void parallel_change_array(int* &data, int list_count) {
    // Instructional example - parallel version
    parallel_for(blocked_range<int>(0, list_count),
        [=](const blocked_range<int>& r) {
            for (int i = r.begin(); i < r.end(); i++) {
                data[i] = busyfunc(data[i])*123;
            }
        });
}
It takes me 0.584889 seconds.
As for busyfunc(int m):
int busyfunc(int m)
{
m *= 32;
return m;
}
Can you tell me why the function without TBB takes less time than the one with TBB?
I think the problem is that the functions are too simple, so the result is easy to compute without TBB.
First, busyfunc() does not seem so busy, because 9M elements are computed in just half a second; this makes the example rather memory-bound (uncached memory operations take orders of magnitude more cycles than arithmetic operations). Memory-bound computations do not scale as well as compute-bound ones; e.g. plain memory copying usually scales up to no more than, say, 4x, even when running on a much bigger number of cores/processors.
Also, memory-bound programs are more sensitive to NUMA effects, and since you allocated this array as contiguous memory using standard C++, it will by default be allocated entirely on the same memory node where the initialization occurs. This default can be altered by running with numactl -i all --.
Last, but most important: TBB initializes its threads lazily and pretty slowly. I guess you do not intend to write an application which exits after 0.5 seconds spent on parallel computation. Thus, a fair benchmark should take into account all the warm-up effects which are expected in the real application. At the very least, it has to wait until all the threads are up and running before starting measurements. This answer suggests one way to do that.
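The linked answer is not reproduced here, but one crude way to factor out the warm-up is simply to run a throw-away parallel loop before starting the timer, e.g.:

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>

// Run some trivial parallel work once so that the TBB worker threads are
// already up when the measured parallel_change_array() call starts.
void warm_up_tbb()
{
    tbb::parallel_for(tbb::blocked_range<int>(0, 1000000),
        [](const tbb::blocked_range<int>& r) {
            volatile int sink = 0;
            for (int i = r.begin(); i < r.end(); i++)
                sink += i;
        });
}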
[update] Please also refer to Alexey's answer for another possible reason lurking in compiler optimization differences.
In addition to Anton's answer, I recommend checking whether the compiler was able to optimize the code equivalently.
For a start, check the performance of the TBB version executed by a single thread, without real parallelism. You can use tbb::global_control or tbb::task_scheduler_init to limit the number of threads to 1, e.g.
tbb::global_control ctl(tbb::global_control::max_allowed_parallelism, 1);
The overheads of thread creation, as well as cache locality or NUMA effects, should not play a role when all the code is executed by one thread. Therefore you should see approximately the same performance as for the no-TBB version. If you do, then you have a scalability issue, and Anton explained possible reasons.
However, if you see that performance drops a lot, then it is a serial optimization issue. One known reason is that some compilers cannot optimize the loop over a blocked_range as well as they optimize the original loop; it has also been observed that storing r.end() into a local variable may help:
int rend = r.end();
for (int i = r.begin(); i < rend; i++) {
data[i] = busyfunc(data[i])*123;
}
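Putting both suggestions together, a single-threaded sanity check of the questioner's code might look roughly like this (a sketch; it assumes a TBB version that provides tbb::global_control and reuses busyfunc from the question):

#include <tbb/blocked_range.h>
#include <tbb/global_control.h>
#include <tbb/parallel_for.h>

int busyfunc(int m) { return m * 32; }

void parallel_change_array(int* data, int list_count)
{
    tbb::parallel_for(tbb::blocked_range<int>(0, list_count),
        [=](const tbb::blocked_range<int>& r) {
            int rend = r.end(); // helps some compilers optimize the inner loop
            for (int i = r.begin(); i < rend; i++)
                data[i] = busyfunc(data[i]) * 123;
        });
}

int main()
{
    const int n = 9000000;
    int* data = new int[n]();

    // Limit TBB to a single thread: the timing should now roughly match
    // the plain serial loop if no serial optimization has been lost.
    tbb::global_control ctl(tbb::global_control::max_allowed_parallelism, 1);
    parallel_change_array(data, n);

    delete[] data;
}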
I need to parallelize this loop, and I thought that using OpenMP was a good idea, but I have never studied it before.
#pragma omp parallel for
for(std::set<size_t>::const_iterator it=mesh->NEList[vid].begin();
it!=mesh->NEList[vid].end(); ++it){
worst_q = std::min(worst_q, mesh->element_quality(*it));
}
In this case the loop is not parallelized, because it uses iterators and the compiler cannot figure out how to split it.
Can you help me?
OpenMP requires that the controlling predicate in parallel for loops has one of the following relational operators: <, <=, > or >=. Only random access iterators provide these operators and hence OpenMP parallel loops work only with containers that provide random access iterators. std::set provides only bidirectional iterators. You may overcome that limitation using explicit tasks. Reduction can be performed by first partially reducing over private to each thread variables followed by a global reduction over the partial values.
double *t_worst_q;
// Cache size on x86/x64 in number of t_worst_q[] elements
const int cb = 64 / sizeof(*t_worst_q);

#pragma omp parallel
{
    #pragma omp single
    {
        t_worst_q = new double[omp_get_num_threads() * cb];
        for (int i = 0; i < omp_get_num_threads(); i++)
            t_worst_q[i * cb] = worst_q;
    }

    // Perform partial min reduction using tasks
    #pragma omp single
    {
        for (std::set<size_t>::const_iterator it = mesh->NEList[vid].begin();
             it != mesh->NEList[vid].end(); ++it) {
            size_t elem = *it;
            #pragma omp task
            {
                int tid = omp_get_thread_num();
                t_worst_q[tid * cb] = std::min(t_worst_q[tid * cb],
                                               mesh->element_quality(elem));
            }
        }
    }

    // Perform global reduction
    #pragma omp critical
    {
        int tid = omp_get_thread_num();
        worst_q = std::min(worst_q, t_worst_q[tid * cb]);
    }
}

delete [] t_worst_q;
(I assume that mesh->element_quality() returns double)
Some key points:
The loop is executed serially by one thread only, but each iteration creates a new task. These are most likely queued for execution by the idle threads.
Idle threads waiting at the implicit barrier of the single construct begin consuming tasks as soon as they are created.
The value pointed to by it is dereferenced before the task body. If it were dereferenced inside the task body, it would be firstprivate and a copy of the iterator would be created for each task (i.e. on each iteration). This is not what you want.
Each thread performs partial reduction in its private part of the t_worst_q[].
In order to prevent performance degradation due to false sharing, the elements of t_worst_q[] that each thread accesses are spaced out so to end up in separate cache lines. On x86/x64 the cache line is 64 bytes, therefore the thread number is multiplied by cb = 64 / sizeof(double).
The global min reduction is performed inside a critical construct to protect worst_q from being accessed by several threads at once. This is for illustrative purposes only, since the reduction could also be performed by a loop in the main thread after the parallel region, as sketched after these notes.
Note that explicit tasks require a compiler which supports OpenMP 3.0 or 3.1. This rules out all versions of the Microsoft C/C++ compiler (it only supports OpenMP 2.0).
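A sketch of that serial-reduction alternative (it assumes the number of threads was saved into a variable, say nthreads, inside the first single construct):

// Serial reduction after the parallel region, instead of the critical construct.
// Assumes nthreads = omp_get_num_threads(); was recorded inside the single block.
for (int i = 0; i < nthreads; i++)
    worst_q = std::min(worst_q, t_worst_q[i * cb]);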
Random-Access Container
The simplest solution is to just throw everything into a random-access container (like std::vector) and use the index-based loops that are favoured by OpenMP:
// Copy elements
std::vector<size_t> neListVector(mesh->NEList[vid].begin(), mesh->NEList[vid].end());
// Process in a standard OpenMP index-based for loop
#pragma omp parallel for reduction(min : worst_q)
for (int i = 0; i < neListVector.size(); i++) {
worst_q = std::min(worst_q, mesh->element_quality(neListVector[i]));
}
Apart from being incredibly simple, in your situation (tiny elements of type size_t that can easily be copied) this is also the solution with the best performance and scalability.
Avoiding copies
However, in a different situation than yours you may have elements that aren't copied as easily (larger elements) or cannot be copied at all. In this case you can just throw the corresponding pointers in a random-access container:
// Collect pointers
std::vector<const nonCopiableObjectType *> neListVector;
for (const auto &entry : mesh->NEList[vid]) {
neListVector.push_back(&entry);
}
// Process in a standard OpenMP index-based for loop
#pragma omp parallel for reduction(min : worst_q)
for (int i = 0; i < neListVector.size(); i++) {
worst_q = std::min(worst_q, mesh->element_quality(*neListVector[i]));
}
This is slightly more complex than the first solution, still has the same good performance on small elements and increased performance on larger elements.
Tasks and Dynamic Scheduling
Since someone else brought up OpenMP tasks in his answer, I want to comment on that too. Tasks are a very powerful construct, but they have a huge overhead (which even increases with the number of threads), and in this case they just make things more complex.
For the min reduction the use of Tasks is never justified because the creation of a Task in the main thread costs much more than just doing the std::min itself!
For the more complex operation mesh->element_quality you might think that the dynamic nature of tasks can help with load-balancing problems, in case the execution time of mesh->element_quality varies greatly between iterations and you don't have enough iterations to even it out. But even in that case there is a simpler solution: simply use dynamic scheduling by adding the schedule(dynamic) clause to the parallel for line in one of my previous solutions. It achieves the same behaviour with far less overhead.
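For completeness, a sketch of that variant, applied to the vector-of-pointers version above:

#pragma omp parallel for schedule(dynamic) reduction(min : worst_q)
for (int i = 0; i < neListVector.size(); i++) {
    worst_q = std::min(worst_q, mesh->element_quality(*neListVector[i]));
}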
Solving the following exercise:
Write three different versions of a program to print the elements of
ia. One version should use a range for to manage the iteration, the
other two should use an ordinary for loop in one case using subscripts
and in the other using pointers. In all three programs write all the
types directly. That is, do not use a type alias, auto, or decltype to
simplify the code.[C++ Primer]
a question came up: which of these methods of accessing an array is the most optimized in terms of speed, and why?
My Solutions:
Foreach Loop:
int ia[3][4]={{1,2,3,4},{5,6,7,8},{9,10,11,12}};
for (int (&i)[4]:ia) //1st method using for each loop
for(int j:i)
cout<<j<<" ";
Nested for loops:
for (int i=0;i<3;i++) //2nd method normal for loop
for(int j=0;j<4;j++)
cout<<ia[i][j]<<" ";
Using pointers:
int (*i)[4] = ia;
for (int t = 0; t < 3; i++, t++) {   // 3rd method: using pointers
    for (int x = 0; x < 4; x++)
        cout << (*i)[x] << " ";
}
Using auto:
for(auto &i:ia) //4th one using auto but I think it is similar to 1st.
for(auto j:i)
cout<<j<<" ";
Benchmark result using clock()
1st: 3.6 (6,4,4,3,2,3)
2nd: 3.3 (6,3,4,2,3,2)
3rd: 3.1 (4,2,4,2,3,4)
4th: 3.6 (4,2,4,5,3,4)
Simulating each method 1000 times:
1st: 2.29375 2nd: 2.17592 3rd: 2.14383 4th: 2.33333
Process returned 0 (0x0) execution time : 13.568 s
Compiler used: MinGW 3.2 with the c++11 flag enabled. IDE: Code::Blocks.
I have some observations and points to make and I hope you get your answer from this.
The fourth version, as you mention yourself, is basically the same as the first version. auto can be thought of as just a coding shortcut (this is of course not strictly true, as using auto can result in getting different types than you expected, and therefore in different runtime behavior; but most of the time this is true).
Your solution using pointers is probably not what people mean when they say that they are using pointers! One solution might be something like this:
for (int i = 0, *p = &(ia[0][0]); i < 3 * 4; ++i, ++p)
cout << *p << " ";
or to use two nested loops (which is probably pointless):
for (int i = 0, *p = &(ia[0][0]); i < 3; ++i)
for (int j = 0; j < 4; ++j, ++p)
cout << *p << " ";
from now on, I'm assuming this is the pointer solution you've written.
In such a trivial case as this, the part that will absolutely dominate your running time is the cout. The time spent in bookkeeping and checks for the loop(s) will be completely negligible comparing to doing I/O. Therefore, it won't matter which loop technique you use.
Modern compilers are great at optimizing such ubiquitous tasks and access patterns (iterating over an array.) Therefore, chances are that all these methods will generate exactly the same code (with the possible exception of the pointer version, which I will talk about later.)
The performance of most code like this depends more on the memory access pattern than on how exactly the compiler generates the branch instructions (and the rest of the operations). This is because if a required memory block is not in the CPU cache, it's going to take roughly several hundred CPU cycles (just a ballpark number) to fetch those bytes from RAM. Since all the examples access memory in exactly the same order, their behavior with respect to memory and cache will be the same, and they will have roughly the same running time.
As a side note, the way these examples access memory is the best way for it to be accessed: linearly, consecutively, and from start to finish. Again, there are problems with the cout in there, which can be a very complicated operation and can even call into the OS on every invocation, which might result in, among other things, an almost complete eviction of everything useful from the CPU cache.
On 32-bit systems and programs, the size of an int and a pointer are usually equal (both are 32 bits!) Which means that it doesn't matter much whether you pass around and use index values or pointers into arrays. On 64-bit systems however, a pointer is 64 bits but an int will still usually be 32 bits. This suggests that it is usually better to use indexes into arrays instead of pointers (or even iterators) on 64-bit systems and programs.
In this particular example, this is not significant at all though.
Your code is very specific and simple, but in the general case it is almost always better to give the compiler as much information about your code as possible. This means you should use the narrowest, most specific tool available for the job. It in turn means that a generic for loop (i.e. for (int i = 0; i < n; ++i)) is worse for the compiler than a range-based for loop (i.e. for (auto i : v)), because in the latter case the compiler knows you are going to iterate over the whole range without going outside it or breaking out of the loop, while in the generic for loop case, especially if your code is more complex, the compiler cannot be sure of this and has to insert extra checks and tests to make sure the code executes as the C++ standard says it should.
In many (most?) cases, although you might think performance matters, it does not. And most of the time you rewrite something to gain performance, you don't gain much. And most of the time the performance gain you get is not worth the loss in readability and maintainability that you sustain. So, design your code and data structures right (and keep performance in mind) but avoid this kind of "micro-optimization" because it's almost always not worth it and even harms the quality of the code too.
Generally, performance in terms of speed is very hard to reason about. Ideally you have to measure the time with real data on real hardware in real working conditions using sound scientific measuring and statistical methods. Even measuring the time it takes a piece of code to run is not at all trivial. Measuring performance is hard, and reasoning about it is harder, but these days it is the only way of recognizing bottlenecks and optimizing the code.
I hope I have answered your question.
EDIT: I wrote a very simple benchmark for what you are trying to do. The code is here. It's written for Windows and should be compilable on Visual Studio 2012 (because of the range-based for loops.) And here are the timing results:
Simple iteration (nested loops): min:0.002140, avg:0.002160, max:0.002739
Simple iteration (one loop): min:0.002140, avg:0.002160, max:0.002625
Pointer iteration (one loop): min:0.002140, avg:0.002160, max:0.003149
Range-based for (nested loops): min:0.002140, avg:0.002159, max:0.002862
Range(const ref)(nested loops): min:0.002140, avg:0.002155, max:0.002906
The relevant numbers are the "min" times (over 2000 runs of each test, for 1000x1000 arrays.) As you see, there is absolutely no difference between the tests. Note that you should turn on compiler optimizations or test 2 will be a disaster and cases 4 and 5 will be a little worse than 1 and 3.
And here are the code for the tests:
// 1. Simple iteration (nested loops)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows; ++i)
    for (unsigned j = 0; j < gc_Cols; ++j)
        sum += g_Data[i][j];

// 2. Simple iteration (one loop)
unsigned sum = 0;
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += g_Data[i / gc_Cols][i % gc_Cols];

// 3. Pointer iteration (one loop)
unsigned sum = 0;
unsigned * p = &(g_Data[0][0]);
for (unsigned i = 0; i < gc_Rows * gc_Cols; ++i)
    sum += *p++;

// 4. Range-based for (nested loops)
unsigned sum = 0;
for (auto & i : g_Data)
    for (auto j : i)
        sum += j;

// 5. Range(const ref)(nested loops)
unsigned sum = 0;
for (auto const & i : g_Data)
    for (auto const & j : i)
        sum += j;
Many factors affect it:
It depends on the compiler
It depends on the compiler flags used
It depends on the computer used
There is only one way to know the exact answer: measure the time when dealing with huge arrays (maybe filled by a random number generator). That is the same method you have already used, except that the array size should be at least 1000x1000.
I have a function that I eventually want to parallelize.
Currently, I call things in a for loop.
double temp = 0;
int y = 123; // is a value set by other code
for(vector<double>::iterator i=data.begin(); i != data.end(); i++){
temp += doStuff(i, y);
}
doStuff needs to know how far down the list it is, so I use i - data.begin() to calculate that.
Next, I'd like to use the std::for_each algorithm instead. My challenge is that I need to pass the address of my iterator and the value of y. I've seen examples of using bind2nd to pass a parameter to the function, but how can I pass the address of the iterator as the first parameter?
The Boost FOREACH macro also looks like a possibility; however, I do not know whether it will parallelize auto-magically like the STL version does.
Thoughts, ideas, suggestions?
If you want real parallelization here, use GCC with tree vectorization enabled (-O3) and SIMD (e.g. -march=native to get SSE support). If the operation (doStuff) is non-trivial, you could opt to do it ahead of time (std::transform or std::for_each) and accumulate next (std::accumulate), since the accumulation will be optimized like nothing else on SSE instructions!
void apply_function(double& value)
{
value *= 3; // just a sample...
}
// ...
std::vector<double> data(1000);
std::for_each(data.begin(), data.end(), &apply_function);
double sum = std::accumulate(data.begin(), data.end(), 0.0); // 0.0, not 0, so the sum is accumulated as double
Note that though this will not actually run on multiple threads, the performance increase will be massive, since SSE4 instructions can handle many floating-point operations in parallel on a single core.
If you wanted true parallelism, use one of the following
GNU Parallel Mode
Compile with g++ -fopenmp -D_GLIBCXX_PARALLEL:
__gnu_parallel::accumulate(data.begin(), data.end(), 0.0);
OpenMP directly
Compile with g++ -fopenmp
double sum = 0.0;
#pragma omp parallel for reduction (+:sum)
for (size_t i = 0; i < data.size(); i++)
{
sum += do_stuff(i, data[i]);
}
This will result in the loop being parallelized into as many threads (OMP team) as there are (logical) CPU cores on the actual machine, and the result 'magically' combined and synchronized.
Final remarks:
You can simulate the binary function for for_each by using a stateful function object. This is not exactly recommended practice. It will also appear to be very inefficient (when compiling without optimization, it is). This is due to the fact that function objects are passed by value throughout the STL. However, it is reasonable to expect a compiler to completely optimize the potential overhead of that away, especially for simple cases like the following:
struct myfunctor
{
    size_t index;
    myfunctor() : index(0) {}
    double operator()(const double& v)
    {
        ++index;      // the functor carries the element index as state
        return v * 3; // again, just a sample
    }
};
// ...
std::for_each(data.begin(), data.end(), myfunctor());
temp += doStuff( i, y ); cannot be auto parallelized. The operator += doesn't play well with concurrency.
Further, the STL algorithms don't parallelize anything. Both Visual Studio and GCC have parallel algorithms similar to std::for_each; if that is what you're after, you'll have to use those.
OpenMP can auto parallelize for loops, but you have to use pragmas to tell the compiler when and how (it can't figure it out for you).
You may have confused parallelization with loop unrolling, which is a common optimization in std::for_each implementations.
This is fairly straightforward if you can change doStuff so that it takes the value of the current element separately from the index at which the current element is located. Consider:
struct context {
    std::size_t _index;
    int         _y;
    double      _result;
};

context do_stuff_wrapper(context current, double value)
{
    current._result += doStuff(current._index, value, current._y);
    current._index++;
    return current;
}

context c = { 0, 123, 0.0 };
context result = std::accumulate(data.begin(), data.end(), c, do_stuff_wrapper);
Note, however, that the Standard Library algorithms cannot "auto-parallelize" because the functions they call may have side effects (the compiler knows whether side effects are produced, but the library functions don't). If you want a parallelized loop, you'll have to go with a special-purpose parallelizing algorithms library, like PPL or TBB.
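As a sketch of what that could look like with TBB (it assumes, as above, that doStuff is changed to take the index and the element value separately, plus y; the signature below is hypothetical):

#include <cstddef>
#include <functional>
#include <vector>
#include <tbb/blocked_range.h>
#include <tbb/parallel_reduce.h>

double doStuff(std::size_t index, double value, int y); // hypothetical signature, as above

double parallel_sum(const std::vector<double>& data, int y)
{
    return tbb::parallel_reduce(
        tbb::blocked_range<std::size_t>(0, data.size()), 0.0,
        [&](const tbb::blocked_range<std::size_t>& r, double acc) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                acc += doStuff(i, data[i], y);  // each chunk accumulates locally
            return acc;
        },
        std::plus<double>());                   // partial sums are then combined
}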