reduction with string type in OpenMP - c++

I am using OpenMP to parallelize a for loop, like so:
std::string stringType = "somevalue";
#pragma omp parallel for reduction(+ : stringType)
// a for loop here in which every iteration appends a string to stringType
The only way I can think of to do this is to convert to an int representation in some way first and then convert back at the end, but this has obvious overhead. Is there a better way to perform this style of operation?

As mentioned in the comments, reduction assumes that the operation is associative and commutative. The values may be computed in any order and be "accumulated" through any kind of partial results, and the final result will be the same.
There is no guarantee that an OpenMP for loop will distribute contiguous iterations to each thread unless the loop schedule explicitly requests that. There is no guarantee either that contiguous blocks will be distributed in order of increasing thread number (i.e. thread #0 might go through iterations 1000-1999 while thread #1 goes through 0-999). If you need that behavior, then you should define your own schedule.
Something like:
int N = 1000;
std::string globalString("initial value");
#pragma omp parallel shared(N, globalString)
{
    std::string localString; // empty string
    // Set the schedule by hand: contiguous blocks in increasing thread order
    int iterFrom = omp_get_thread_num() * (N / omp_get_num_threads());
    int iterTo;
    if (omp_get_num_threads() == omp_get_thread_num() + 1)
        iterTo = N;
    else
        iterTo = (1 + omp_get_thread_num()) * (N / omp_get_num_threads());
    // Loop - concatenate a number of neighboring values in the right order.
    // No #pragma omp for: each thread goes through the loop, but the loop
    // boundaries change according to the thread ID.
    for (int ii = iterFrom; ii < iterTo; ii++) {
        localString += get_some_string(ii);
    }
    // Dirty trick to concatenate the strings from all threads in the right order
    for (int ii = 0; ii < omp_get_num_threads(); ii++) {
        #pragma omp barrier
        if (ii == omp_get_thread_num())
            globalString += localString;
    }
}
A better way would be to have a shared array of std::string, each thread using one as a local accumulator. At the end, a single thread can run the concatenation part (and avoid the dirty trick and all its overhead-heavy barrier calls).
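For illustration, a minimal sketch of that shared-array approach, keeping the hand-rolled per-thread bounds from above (get_some_string and N are carried over from the example; needs <omp.h>, <string>, <vector>):
int N = 1000;
std::string globalString("initial value");
std::vector<std::string> partial;                  // one accumulator per thread
#pragma omp parallel shared(N, globalString, partial)
{
    #pragma omp single
    partial.resize(omp_get_num_threads());         // implicit barrier after single

    int iterFrom = omp_get_thread_num() * (N / omp_get_num_threads());
    int iterTo   = (omp_get_num_threads() == omp_get_thread_num() + 1)
                       ? N
                       : (1 + omp_get_thread_num()) * (N / omp_get_num_threads());
    std::string local;
    for (int ii = iterFrom; ii < iterTo; ii++)
        local += get_some_string(ii);
    partial[omp_get_thread_num()] = local;         // each thread writes its own slot

    #pragma omp barrier                            // make sure every slot is filled
    #pragma omp single
    for (std::size_t t = 0; t < partial.size(); t++)
        globalString += partial[t];                // one thread concatenates, in order
}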

Related

Aspects that affect the efficiency of OpenMP parallelism

I would like to parallelize a big loop using OpenMP to improve its efficiency. Here is the main part of the toy code:
vector<int> config;
config.resize(indices.size());
omp_set_num_threads(2);
#pragma omp parallel for schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallel
    #pragma omp simd
    for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config); // do simple computations on the picked values to get the index
    #pragma omp atomic
    result[index]++;
}
Then I found that I cannot get any efficiency improvement with 2, 4, or 8 threads: the execution time of the parallel versions is generally greater than that of the sequential version. The outer loop has 10000 iterations and they are all independent, so I want multiple threads to execute those iterations in parallel.
I guess the reasons for the performance decrease may include: private copies of config? Random access of ref_table? The expensive atomic operation? What are the exact reasons for the performance decrease and, more importantly, how can I get a shorter execution time?
Private copies of config and random access of ref_table are not problematic. I think the workload is very small, so there are 2 potential issues which prevent efficient parallelization:
the atomic operation is too expensive;
the overheads are bigger than the workload (which simply means that it is not worth parallelizing this loop with OpenMP).
I do not know which one is more significant in your case, so it is worth trying to get rid of the atomic operation. There are 2 cases:
a) If the result array is zero-initialized, you have to use
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config), where N is the size of the result array, and delete #pragma omp atomic. Note that array-section reduction works on OpenMP 4.5 or later. It is also worth removing #pragma omp simd for a loop of only 2-10 iterations. So, your code should look like this:
#pragma omp parallel for reduction(+:result[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) { // the outer loop that I would like to parallel
    for (int j = 0; j < indices.size(); ++j) { // pick some columns from a big ref_table
        config[j] = ref_table[i][indices[j]];
    }
    int index = GetIndex(config); // do simple computations on the picked values to get the index
    result[index]++;
}
b) If the result array is not zero-initialized, the solution is very similar: use a temporary zero-initialized array in the loop and, after that, add it to the result array (see the sketch below).
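A rough sketch of case (b), reusing N (the size of the result array) from above; local_result and lr are illustrative names, and the array-section reduction again needs OpenMP 4.5+ (and <vector>):
std::vector<int> local_result(N, 0);   // zero-initialized temporary
int *lr = local_result.data();         // OpenMP array sections need a pointer or array, not a std::vector
#pragma omp parallel for reduction(+:lr[0:N]) schedule(static, 5000) firstprivate(config)
for (int i = 0; i < 10000; ++i) {
    for (int j = 0; j < indices.size(); ++j)
        config[j] = ref_table[i][indices[j]];
    int index = GetIndex(config);
    lr[index]++;
}
for (int k = 0; k < N; ++k)            // add the temporary to the (non-zero) result array once
    result[k] += lr[k];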
If the speed still does not increase, then your code is not worth parallelizing with OpenMP on your hardware.

Parallelizing nested loops with OpenMP

Perhaps the solution to my problem is obvious to someone with experience with OpenMP, but I don't have it. I want to accelerate the following subroutine using OpenMP:
void Build_ERIS(vector<double> &eris, vector<Atomic_Orbital> &Basis)
{
int basis_size = Basis.size();
int m = basis_size*(basis_size+1)/2;
eris.resize(m*(m+1)/2);
bool compute;
std::fill(eris.begin(), eris.end(), 0);
int i_orbital,j_orbital, k_orbital,l_orbital, i_primitive, j_primitive, k_primitive,l_primitive,ij,kl, ijkl,ijij,klkl;
#pragma omp parallel
{
#pragma omp for ordered
for(i_orbital=0; i_orbital<basis_size; i_orbital++){
for(j_orbital=0; j_orbital<i_orbital+1; j_orbital++){
ij = i_orbital*(i_orbital+1)/2 + j_orbital;
for(k_orbital=0; k_orbital<basis_size; k_orbital++){
for(l_orbital=0; l_orbital<k_orbital+1; l_orbital++){
kl = k_orbital*(k_orbital+1)/2 + l_orbital;
if (ij >= kl) {
ijkl = composite_index(i_orbital,j_orbital,k_orbital,l_orbital);
ijij = composite_index(i_orbital,j_orbital,i_orbital,j_orbital);
klkl = composite_index(k_orbital,l_orbital,k_orbital,l_orbital);
for(i_primitive=0; i_primitive<Basis[i_orbital].contraction.size; i_primitive++)
for(j_primitive=0; j_primitive<Basis[j_orbital].contraction.size; j_primitive++)
for(k_primitive=0; k_primitive<Basis[k_orbital].contraction.size; k_primitive++)
for(l_primitive=0; l_primitive<Basis[l_orbital].contraction.size; l_primitive++)
eris[ijkl] +=
normconst(Basis[i_orbital].contraction.exponent[i_primitive],Basis[i_orbital].angular.l, Basis[i_orbital].angular.m, Basis[i_orbital].angular.n)*
normconst(Basis[j_orbital].contraction.exponent[j_primitive],Basis[j_orbital].angular.l, Basis[j_orbital].angular.m, Basis[j_orbital].angular.n)*
normconst(Basis[k_orbital].contraction.exponent[k_primitive],Basis[k_orbital].angular.l, Basis[k_orbital].angular.m, Basis[k_orbital].angular.n)*
normconst(Basis[l_orbital].contraction.exponent[l_primitive],Basis[l_orbital].angular.l, Basis[l_orbital].angular.m, Basis[l_orbital].angular.n)*
Basis[i_orbital].contraction.coef[i_primitive]*
Basis[j_orbital].contraction.coef[j_primitive]*
Basis[k_orbital].contraction.coef[k_primitive]*
Basis[l_orbital].contraction.coef[l_primitive]*
ERI_int(Basis[i_orbital].contraction.center.x, Basis[i_orbital].contraction.center.y, Basis[i_orbital].contraction.center.z, Basis[i_orbital].contraction.exponent[i_primitive],Basis[i_orbital].angular.l, Basis[i_orbital].angular.m, Basis[i_orbital].angular.n,
Basis[j_orbital].contraction.center.x, Basis[j_orbital].contraction.center.y, Basis[j_orbital].contraction.center.z, Basis[j_orbital].contraction.exponent[j_primitive],Basis[j_orbital].angular.l, Basis[j_orbital].angular.m, Basis[j_orbital].angular.n,
Basis[k_orbital].contraction.center.x, Basis[k_orbital].contraction.center.y, Basis[k_orbital].contraction.center.z, Basis[k_orbital].contraction.exponent[k_primitive],Basis[k_orbital].angular.l, Basis[k_orbital].angular.m, Basis[k_orbital].angular.n,
Basis[l_orbital].contraction.center.x, Basis[l_orbital].contraction.center.y, Basis[l_orbital].contraction.center.z, Basis[l_orbital].contraction.exponent[l_primitive],Basis[l_orbital].angular.l, Basis[l_orbital].angular.m, Basis[l_orbital].angular.n);
/**/
}
}
}
}
}
}
}
My concern is how best to be sure that, after the OpenMP parallelization, the reductions into eris[ijkl] still give the same values as the serial version of the routine. How can I do loop fusion in a way that is numerically safe?
Several things I see.
1) #pragma omp for ordered means: execute every single one of the iterations of this loop in order. This essentially means that while you're executing "in parallel," all of your work will be done in serial. Remove it.
2) You have not declared any of your variables shared or private. Note that all variables by default will be shared, so in your case ij and kl for instance will be accessible by any thread working on any iteration. You can no doubt see how this would cause a race condition if, say, iteration 100 changed variable ij while iteration 1 thought it was using it.
3) Your variable eris[ijkl] as you rightly noted must be reduced properly. If ijkl can never be the same value for two different iterations in your i_orbital loop, then you're fine as-is; no two threads will ever be changing the same variable eris[ijkl] potentially at the same time. If it can be the same value, then you have to carefully handle reduction on the array.
4) Here's what you should work with for starters. This is assuming that ijkl will never be the same value for two different iterations, and that your functions do not take in any non-constant references (potentially turning what I'm assuming are input variables into output variables).
#pragma omp parallel for private(i_orbital, j_orbital, ij, k_orbital, l_orbital, kl, ijkl, ijij, klkl, i_primitive, j_primitive, k_primitive, l_primitive)
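If ijkl can repeat between iterations of the i_orbital loop, one hedged option (a sketch, not the original code, and it needs OpenMP 4.5+) is an array-section reduction over the flat eris buffer; eris_ptr is a name introduced here, and m is the value computed at the top of Build_ERIS:
double *eris_ptr = eris.data();  // array sections require a pointer/array, not a std::vector
// note: every thread gets a private, zero-initialized copy of the whole buffer,
// which can be memory-hungry for large basis sets
#pragma omp parallel for reduction(+ : eris_ptr[0:m*(m+1)/2]) \
    private(j_orbital, ij, k_orbital, l_orbital, kl, ijkl, ijij, klkl, \
            i_primitive, j_primitive, k_primitive, l_primitive)
for (i_orbital = 0; i_orbital < basis_size; i_orbital++) {
    // ... same nested loop body as above, accumulating into eris_ptr[ijkl] ...
}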

OpenMP parallel code has not the same output as the serial code

I had to change and extend my algorithm for some signal analysis (using the polyfilterbank technique) and couldn't use my old OpenMP code, but in the new code the results are not as expected (the results in the beginning positions in the array are somehow incorrect in comparison with a serial run [serial code shows the expected result]).
So in the first loop I have some FFT data in tFFTin, which I'm multiplying by a window function.
The goal is that a thread runs the inner loops for each polyphase factor. To avoid locks I use the reduction pragma (no complex reduction is defined by the standard, so I use my own, where each thread's omp_priv variable gets initialized with the omp_orig [so with tFFTin]). The reason I'm using the ordered pragma is that the results should be added to the output vector in an ordered way.
typedef std::complex<float> TComplexType;
typedef std::vector<TComplexType> TFFTContainer;

#pragma omp declare reduction(complexMul:TFFTContainer:            \
        transform(omp_in.begin(), omp_in.end(),                    \
                  omp_out.begin(), omp_out.begin(),                \
                  std::multiplies<TComplexType>()))                \
        initializer (omp_priv(omp_orig))

void ConcreteResynthesis::ApplyPolyphase(TFFTContainer& tFFTin, TFFTContainer& tFFTout, TWindowContainer& tWindow, *someparams*) {
    #pragma omp parallel for shared(tWindow) firstprivate(sFFTParams) reduction(complexMul: tFFTin) ordered if(iFFTRawDataLen>cMinParallelSize)
    for (int p = 0; p < uPolyphase; ++p) {
        int iPolyphaseOffset = p * uFFTLength;
        for (int i = 0; i < uFFTLength; ++i) {
            tFFTin[i] *= tWindow[iPolyphaseOffset + i]; ///< get FFT input data from raw data
        }
        #pragma omp ordered
        {
            // using the overlap and add method
            for (int i = 0; i < sFFTParams.uFFTLength; ++i) {
                pDataPool->GetFullSignalData(workSignal)[mSignalPos + iPolyphaseOffset + i] += tFFTin[i];
            }
        }
    }
    mSignalPos = mSignalPos + mStep;
}
Is there a race condition or something, which makes wrong outputs at the beginning? Or do I have some logic error?
Another issue is that I don't really like my solution of using the ordered pragma; is there a better approach? (I also tried to use the reduction model for this, but the compiler doesn't allow me to use a pointer type for that.)
I think your problem is that you have implemented a very cool custom reduction for tFFTin, but this reduction is only applied at the end of the parallel region - which is after you use the data in tFFTin. Another thing, as H. Iliev mentions, is that the second (and every later) iteration of the outer loop relies on data computed in the previous iteration - a classic dependency.
I think you should try parallelizing the inner loops.
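A minimal sketch of that suggestion, keeping the outer polyphase loop (and therefore the ordering and the tFFTin dependency) serial; the names are taken from the question and the if threshold is an assumption:
for (int p = 0; p < uPolyphase; ++p) {
    int iPolyphaseOffset = p * uFFTLength;
    // window multiply: iterations are independent for a fixed p
    #pragma omp parallel for if(uFFTLength > cMinParallelSize)
    for (int i = 0; i < uFFTLength; ++i)
        tFFTin[i] *= tWindow[iPolyphaseOffset + i];
    // overlap and add: each i touches a distinct output element for a fixed p
    #pragma omp parallel for if(uFFTLength > cMinParallelSize)
    for (int i = 0; i < sFFTParams.uFFTLength; ++i)
        pDataPool->GetFullSignalData(workSignal)[mSignalPos + iPolyphaseOffset + i] += tFFTin[i];
}
mSignalPos = mSignalPos + mStep;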

C++ OpenMP: Split for loop in even chunks static and join data at the end

I'm trying to make a for loop multi-threaded in C++ so that the calculation gets divided among multiple threads. Yet it contains data that needs to be joined together in the order in which it appears.
So the idea is to first join the small bits on many cores (25,000+ iterations) and then join the combined data once more at the end.
std::vector<int> ids;              // mappings
std::map<int, myData> combineData; // data per id
myData outputData;                 // combined data based on the mappings
myData threadData;                 // data per thread
#pragma omp parallel for default(none) private(data, threadData) shared(combineData)
for (int i = 0; i < 30000; i++)
{
    threadData += combineData[ids[i]];
}
// Then here I would like to get all the seperate thread data and combine them in a similar manner
// I.e.: for each threadData: outputData += threadData
What would be the efficient and good way to approach this?
How can I schedule the OpenMP loop so that it is split evenly into ordered chunks?
For example for 2 threads:
[0, 1, 2, 3, 4, .., 14999] & [15000, 15001, 15002, 15003, 15004, .., 29999]
If there's a better way to join the data (which involves joining a lot of std::vectors together and some matrix math), yet preserve the order of the additions, pointers to that would help as well.
Added information
The addition is associative, though not commutative.
myData is not an intrinsic type. It's a class containing data as multiple std::vectors (and some data related to the Autodesk Maya API.)
Each cycle is doing a similar matrix multiplication to many points and adds these points to a vector (in theory the calculation time should stay roughly similar per cycle)
Basically it's adding mesh data (consisting of vectors of data) to each other (combining meshes), though the order of the whole thing accounts for the index value of the vertices. The vertex index should be consistent and rebuildable.
This depends on a few properties of the addition operator of myData. If the operator is both associative, (A + B) + C = A + (B + C), and commutative, A + B = B + A, then you can use a critical section or, if the data is plain old data (e.g. a float, an int, ...), a reduction.
However, if it's not commutative, as you say (the order of operations matters), but still associative, you can fill an array, with one element per thread, with the combined data in parallel and then merge the elements in order in serial (see the code below). Using schedule(static) will split the chunks more or less evenly and in order of increasing thread number, as you want.
If the operator is neither associative nor commutative then I don't think you can parallelize it (efficiently - e.g. try parallelizing a Fibonacci series efficiently).
std::vector<int> ids;              // mappings
std::map<int, myData> combineData; // data per id
myData outputData;                 // combined data based on the mappings
myData *threadData;
int nthreads;
#pragma omp parallel
{
    #pragma omp single
    {
        nthreads = omp_get_num_threads();
        threadData = new myData[nthreads];
    }
    myData tmp;
    #pragma omp for schedule(static)
    for (int i = 0; i < 30000; i++) {
        tmp += combineData[ids[i]];
    }
    threadData[omp_get_thread_num()] = tmp;
}
for (int i = 0; i < nthreads; i++) {
    outputData += threadData[i];
}
delete[] threadData;
Edit: I'm not 100% sure at this point that the chunks will be assigned in order of increasing thread number with #pragma omp for schedule(static) (though I would be surprised if they were not). There is an ongoing discussion on this issue. Meanwhile, if you want to be 100% sure, then instead of
#pragma omp for schedule(static)
for (int i = 0; i < 30000; i++) {
    tmp += combineData[ids[i]];
}
you can do
const int nthreads = omp_get_num_threads();
const int ithread = omp_get_thread_num();
const int start = ithread * 30000 / nthreads;
const int finish = (ithread + 1) * 30000 / nthreads;
for (int i = start; i < finish; i++) {
    tmp += combineData[ids[i]];
}
Edit:
I found a more elegant way to fill in parallel but merge in order
#pragma omp parallel
{
    myData tmp;
    #pragma omp for schedule(static) nowait
    for (int i = 0; i < 30000; i++) {
        tmp += combineData[ids[i]];
    }
    #pragma omp for schedule(static) ordered
    for (int i = 0; i < omp_get_num_threads(); i++) {
        #pragma omp ordered
        outputData += tmp;
    }
}
This avoids allocating data for each thread (threadData) and merging outside the parallel region.
If you really want to preserve the same order as in the serial case, then there is no other way than doing it serially. In that case you can maybe try to parallelize the operations done in operator+=.
If the operations can be done in any order, but the reduction of the blocks has a specific order, then it may be worth having a look at TBB parallel_reduce. It will require you to write more code, but if I remember correctly you can define complex custom reduction operations.
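A hedged sketch of that TBB route (assuming myData is default-constructible and operator+= behaves as above; tbb::parallel_reduce only requires the reduction to be associative, not commutative):
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>

myData total = tbb::parallel_reduce(
    tbb::blocked_range<std::size_t>(0, ids.size()),
    myData(),                                            // identity (empty) value
    [&](const tbb::blocked_range<std::size_t>& r, myData acc) {
        for (std::size_t i = r.begin(); i != r.end(); ++i)
            acc += combineData[ids[i]];                  // accumulate one contiguous block
        return acc;
    },
    [](myData left, const myData& right) {               // merge two adjacent partial results
        left += right;
        return left;
    });
outputData += total;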
If the order of the operations doesn't matter, then your snippet is almost complete. What it lacks is possibly a critical construct to aggregate private data:
std::vector<int> ids;              // mappings
std::map<int, myData> combineData; // data per id
myData outputData;                 // combined data based on the mappings
#pragma omp parallel
{
    myData threadData; // data per thread
    #pragma omp for nowait
    for (int ii = 0; ii < total_iterations; ii++)
    {
        threadData += combineData[ids[ii]];
    }
    #pragma omp critical
    {
        outputData += threadData;
    }
    #pragma omp barrier
    // From here on you are ensured that every thread sees
    // the correct value of outputData
}
The schedule of the for loop in this case is not important for the semantics. If the overload of operator+= is a relatively stable operation (in terms of the time needed to compute it), then you can use schedule(static), which divides the iterations evenly among threads. Otherwise you can resort to another scheduling to balance the computational burden (e.g. schedule(guided)).
Finally if myData is a typedef of an intrinsic type, then you can avoid the critical section and use a reduction clause:
#pragma omp for reduction(+:outputData)
for (int ii = 0; ii < total_iterations; ii++)
{
    outputData += combineData[ids[ii]];
}
In this case you don't need to declare anything explicitly as private.

C++ OpenMP slower than serial with default thread count

I tried using OpenMP to parallelize some for loops of my program but failed to get a significant speed improvement (actual degradation is observed). My target machine will have 4-6 cores and I currently rely on the OpenMP runtime to get the thread count for me, so I haven't tried any thread-count combination yet.
Target/Development platform: Windows 64bits
using MinGW64 4.7.2 (rubenvb build)
Sample output with OpenMP
Thread count: 4
Dynamic :0
OMP_GET_NUM_PROCS: 4
OMP_IN_PARALLEL: 1
5.612 // <- returned by omp_get_wtime()
5.627 (sec) // <- returned by clock()
Wall time elapsed: 5.62703
Sample output without OpenMP
2.415 (sec) // <- returned by clock()
Wall time elapsed: 2.415
How I measure the time
struct timeval start, end;
gettimeofday(&start, NULL);
#ifdef _OPENMP
    double t1 = (double) clock();
    double wt = omp_get_wtime();
    sim->resetEnvironment(run);
    tout << omp_get_wtime() - wt << std::endl;
    timeEnd(tout, t1);
#else
    double t1 = (double) clock();
    sim->resetEnvironment(run);
    timeEnd(tout, t1);
#endif
gettimeofday(&end, NULL);
tout << "Wall time elapsed: "
     << ((end.tv_sec - start.tv_sec) * 1000000u + (end.tv_usec - start.tv_usec)) / 1.e6
     << std::endl;
The code
void Simulator::resetEnvironment(int run)
{
    #pragma omp parallel
    {
        // (a)
        #pragma omp for schedule(dynamic)
        for (size_t i = 0; i < vector_1.size(); i++) // size ~ 20
            reset(vector_1[i]);
        #pragma omp for schedule(dynamic)
        for (size_t i = 0; i < vector_2.size(); i++) // size ~ 2.3M
            reset(vector_2[i]);
        #pragma omp for schedule(dynamic)
        for (size_t i = 0; i < vector_3.size(); i++) // size ~ 0.3M
            reset(vector_3[i]);
        for (int level = 0; level < level_count; level++) // (b) level = 3
        {
            #pragma omp for schedule(dynamic)
            for (size_t i = 0; i < vector_4[level].size(); i++) // size ~500 - 1K
                reset(vector_4[level][i]);
        }
        #pragma omp for schedule(dynamic)
        for (long i = 0; i < populationSize; i++) // size ~7M
            resetAgent(agents[i]);
    } // end #parallel
} // end: Simulator::resetEnvironment()
Randomness
Inside reset() function calls, I used a RNG for seeding some agents for subsequent tasks.
Below is my RNG implementation, following a suggestion I saw to use one RNG per thread for thread safety.
class RNG {
public:
    typedef std::mt19937 Engine;

    RNG()
        : real_uni_dist_(0.0, 1.0)
#ifdef _OPENMP
        , engines()
#endif
    {
#ifdef _OPENMP
        int threads = std::max(1, omp_get_max_threads());
        for (int seed = 0; seed < threads; ++seed)
            engines.push_back(Engine(seed));
#else
        engine_.seed(time(NULL));
#endif
    } // end_ctor(RNG)

    /** @return next possible value of the uniform distribution */
    double operator()()
    {
#ifdef _OPENMP
        return real_uni_dist_(engines[omp_get_thread_num()]);
#else
        return real_uni_dist_(engine_);
#endif
    }

private:
    std::uniform_real_distribution<double> real_uni_dist_;
#ifdef _OPENMP
    std::vector<Engine> engines;
#else
    std::mt19937 engine_;
#endif
}; // end_class(RNG)
Question:
At (a), is it good not to use the 'parallel for' shortcut, to avoid the overhead of creating teams?
Which part of my implementation can be the cause of the performance degradation?
Why are the times reported by clock() and omp_get_wtime() so similar? I expected clock() to be somewhat longer than omp_get_wtime().
[Edit]
At (b), my intention in putting the OpenMP directive on the inner loop is that the iteration count of the outer loop is so small (only 3) that I think I can skip it and go directly to the inner loop over vector_4[level]. Is this thought inappropriate? Or will this instruct OpenMP to repeat the outer loop 4 times and hence actually run the inner loop 12 times instead of 3 (say the current thread count is 4)?
Thanks
If the measured wall-clock time (as reported by omp_get_wtime()) is close to the total CPU time (as reported by clock()), this could mean several different things:
the code is running single-threaded, but then the total CPU time will be lower than the wall-clock time;
a very high synchronisation and cache coherency overhead is present and it is huge in comparison to the actual work being done by the threads.
Your case is the second one and the reason is that you use schedule(dynamic). Dynamic scheduling should only be used in cases when each iteration can take a varying amount of time. If such iterations are statically distributed among the threads, work imbalance could occur. schedule(dynamic) takes care of this by giving each task (in your case each single iteration of the loop) to the next thread that finishes its work and becomes idle. There is a certain overhead in synchronising the threads and bookkeeping the distribution of the work items and therefore it should only be used when the amount of work per thread is huge in comparison to the overhead. OpenMP allows you to group more iterations into iteration blocks and this number is specified like schedule(dynamic,100) - this would make each thread execute a block (or chunk) of 100 consecutive iterations before asking for a new one. The default block size for dynamic scheduling is 1, i.e. each vector element is processed by a separate thread. I have no idea how much processing is done in reset() and what kind of elements are there in vector_*, but given the serial run time it is not much at all.
Another source of slowdown is the loss of data locality when you use dynamic scheduling. Depending on the type of elements of those vectors, processing neighbouring elements by different threads leads to false sharing. That means that, e.g. vector_1[i] lies in the same cache line with some other elements of vector_1, e.g. vector_1[i-1] and vector_1[i+1]. When thread 1 modifies vector_1[i], the cache line is reloaded in all other cores that work on the neighbouring elements. If vector_1[] is only written to, the compiler can be smart enough to generate non-temporal stores (those bypass the cache) but it only works with vector stores and having each core do a single iteration at a time means no vectorisation at all. Data locality can be improved by either switching to static scheduling or, if reset() really takes varying amount of time, by setting a reasonable chunk size in the schedule(dynamic) clause. The best chunk size is usually dependent on the processor and often one has to tweak it in order to get the best performance.
So I would strongly suggest that you first switch to static scheduling by replacing all schedule(dynamic) to schedule(static) and then try to optimise further. You don't have to specify the chunk size in the static case as the default is simply the total number of iterations divided by the number of threads, i.e. each thread would get one contiguous block of iterations.
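For illustration, a hedged sketch of that suggestion applied to two of the loops above (the chunk size 4096 is an assumption to be tuned, and the chunked dynamic schedule only makes sense if the per-element cost really varies):
#pragma omp for schedule(static)                 // uniform work: plain static split
for (size_t i = 0; i < vector_2.size(); i++)     // size ~ 2.3M
    reset(vector_2[i]);

#pragma omp for schedule(dynamic, 4096)          // keep dynamic only if resetAgent() cost varies, but in chunks
for (long i = 0; i < populationSize; i++)        // size ~7M
    resetAgent(agents[i]);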
To answer your questions:
1) In (a), the usage of the plain parallel construct (instead of the combined 'parallel for' shortcut) is correct.
2) Congrats, your implementation of your lock-free PRNG looks fine.
3) The slowdown can come from the OpenMP pragmas you use in the inner loop. Parallelize at the top level and avoid fine-grained, inner-loop parallelism.
4) In the code below, I used 'nowait' on each 'omp for', moved the omp directive out of the inner loop in the vector_4 processing, and put a barrier at the end to join all the threads and wait for the end of all the work we spawned before it!
// pseudo code
#pragma omp for schedule(dynamic) nowait
for (size_t i = 0; i < vector_1.size(); i++) // size ~ 20
    reset(vector_1[i]);
#pragma omp for schedule(dynamic) nowait
for (size_t i = 0; i < vector_2.size(); i++) // size ~ 2.3M
    reset(vector_2[i]);
#pragma omp for schedule(dynamic) nowait
for (size_t i = 0; i < vector_3.size(); i++) // size ~ 0.3M
    reset(vector_3[i]);
#pragma omp for schedule(dynamic) nowait
for (int level = 0; level < level_count; level++)
{
    for (size_t i = 0; i < vector_4[level].size(); i++) // size ~500 - 1K
        reset(vector_4[level][i]);
}
#pragma omp for schedule(dynamic) nowait
for (long i = 0; i < populationSize; i++) // size ~7M
    resetAgent(agents[i]);
#pragma omp barrier
A single threaded program will run faster than a multi-threaded one if the useful processing time is less than the overhead incurred by threads.
It is a good idea to determine what the overhead is by implementing a null function and then deciding whether it is a better solution.
From a performance point of view, threads are only useful if the useful processing time is significantly higher than the overhead incurred by the threads and there are real CPUs available to run them.
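A minimal sketch of that null-function measurement (the repetition count and output format are assumptions):
#include <cstdio>
#include <omp.h>

int main() {
    const int reps = 1000;
    double t0 = omp_get_wtime();
    for (int r = 0; r < reps; ++r) {
        #pragma omp parallel
        {
            // null work: measures only the cost of forking and joining the thread team
        }
    }
    double perRegion = (omp_get_wtime() - t0) / reps;
    std::printf("average parallel-region overhead: %.3g s\n", perRegion);
    return 0;
}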