Strange ratio in speedup between release and debug builds in game "Life" - c++

I wrote the classic game of Life with 4-connected neighbors. When I run it in debug, it reports:
Consecutive version: 4.2s
Parallel version: 1.5s
OK, that looks good. But if I run it in release, it reports:
Consecutive version: 0.46s
Parallel version: 1.23s
Why? I run it on a computer with 4 cores and start 4 threads in the parallel section. The answer is correct, but the time is leaking away somewhere and I can't find where. Can anybody help me?
I tried it in Visual Studio 2008 and 2012; the results are the same. OpenMP is enabled in the project settings.
To reproduce the problem, find the defined constant PARALLEL and set it to 1 or 0 to enable or disable OpenMP respectively. The answer will be written to out.txt (out.txt - example of a correct answer). The input must be in in.txt (my input - in.txt). There are some Russian strings that you don't need to understand, but the first number in in.txt is the number of threads to run in the parallel section (it's 4 in the example).
The main part is in the StartSimulation function. If you run the program, you will see some Russian text together with the running time in the console.
The program is fairly big, so I uploaded it to file hosting - main.cpp (l2 means "lab 2" for me).
A note about the StartSimulation function: it cuts the 2D surface of cells into small rectangles. This is done by the AdjustKernelsParameters function.

I do not find the ratio so strange. Having multiple threads cooperate is a complex business and has overheads.
Access to shared memory needs to be serialized, which normally involves some form of locking and contention between threads as they wait for the lock to be released.
Such shared variables also need to be kept synchronized between the processor cores, which can cause significant slowdowns, and the compiler has to treat these critical areas as a kind of "sequence point".
All of this reduces the scope for per-thread optimization, both in the processor hardware and in the compiler, whenever a thread works with a shared variable.
It seems that in this case the overheads of parallelization outweigh the optimization opportunities of the single-threaded case.
If each thread had more independent work to do before it needed to touch a shared variable, these overheads would be less significant.
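To see this effect in isolation, here is a small self-contained sketch (not taken from your code; busyWork and its workPerItem parameter are just placeholders for the per-cell work). With tiny per-item work the parallel loop is typically no faster, or even slower, than the serial one; with heavier per-item work the speedup approaches the core count:
#include <cmath>
#include <cstdio>
#include <omp.h>

// Purely thread-local work: no shared state, so any slowdown comes from
// the parallel machinery itself, not from contention.
static double busyWork(int item, int workPerItem)
{
    double x = item;
    for (int k = 0; k < workPerItem; ++k)
        x = std::sin(x) + 1.0;
    return x;
}

int main()
{
    const int items = 1 << 16;
    const int workLevels[] = {1, 100, 10000};
    for (int workPerItem : workLevels)
    {
        double sumSerial = 0.0, sumParallel = 0.0;

        double t0 = omp_get_wtime();
        for (int i = 0; i < items; ++i)
            sumSerial += busyWork(i, workPerItem);
        double t1 = omp_get_wtime();

        double t2 = omp_get_wtime();
        #pragma omp parallel for reduction(+:sumParallel)
        for (int i = 0; i < items; ++i)
            sumParallel += busyWork(i, workPerItem);
        double t3 = omp_get_wtime();

        std::printf("work=%6d  serial=%.4fs  parallel=%.4fs\n",
                    workPerItem, t1 - t0, t3 - t2);
    }
    return 0;
}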

You are using the guided loop schedule. That is a poor choice for a regular problem like this one, where each task can easily do exactly the same amount of work as any other if the domain is simply divided into chunks of equal size.
Replace schedule(guided) with schedule(static). Also use a sum reduction over livingCount instead of locked increments:
#if PARALLEL == 1
#pragma omp parallel for schedule(static) num_threads(kernelsCount) \
                         reduction(+:livingCount)
#endif
for (int offsetI = 0; offsetI < n; offsetI += kernelPartSizeN)
{
    for (int offsetJ = 0; offsetJ < m; offsetJ += kernelPartSizeM)
    {
        int boundsN = min(kernelPartSizeN, n - offsetI),
            boundsM = min(kernelPartSizeM, m - offsetJ);
        for (int kernelOffsetI = 0; kernelOffsetI < boundsN; ++kernelOffsetI)
        {
            for (int kernelOffsetJ = 0; kernelOffsetJ < boundsM; ++kernelOffsetJ)
            {
                if (BirthCell(offsetI + kernelOffsetI, offsetJ + kernelOffsetJ))
                {
                    ++livingCount;
                }
            }
        }
    }
}


OpenMP accelerates the C++ DLL but slows down Unity

I wrote a native C++ DLL with heavy math computation and then loaded it into the Unity engine.
The problem is this:
When I use OpenMP in the C++ code, it does improve the C++ performance, which I measured by logging the time. But OpenMP slows down Unity: Unity runs faster if I remove OpenMP.
So how can OpenMP speed up the DLL and slow down Unity at the same time?
Here is what the OpenMP code does:
DLLEXPORT void UpdateTreeQuick(DbvtWrapper* wrapper, Vector3* prePositions, Vector3* positions,
                               Triangle* triangles, int triangleCount, float margin)
{
    bool needPropagate = false;
    double d1 = omp_get_wtime();
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < triangleCount; i++)
    {
        Vector3 sixPos[6];
        sixPos[0] = prePositions[triangles[i].A];
        sixPos[1] = prePositions[triangles[i].B];
        sixPos[2] = prePositions[triangles[i].C];
        sixPos[3] = positions[triangles[i].A];
        sixPos[4] = positions[triangles[i].B];
        sixPos[5] = positions[triangles[i].C];
        DbvtVolume vol = DbvtVolume::FromPoints(sixPos, 6);
        if (wrapper->m_dbvt->refit(wrapper->m_leaves[i], vol, margin))
            needPropagate = true;
    }
    double d2 = omp_get_wtime();
    if (triangleCount == 10222)
        Debug::Log(d2 - d1);
}
Here is how I call this native code in Unity:
private void Update()
{
    NativeAPI.UpdateTreeQuick(nativeDvbtWrapper, (Vector4*)nativePrePositionsWorld.GetUnsafePtr<Vector4>(),
        (Vector4*)nativePositionsWorld.GetUnsafePtr<Vector4>(), (Triangle*)nativeTriangles.GetUnsafePtr<Triangle>(),
        m_mesh.triangles.Length / 3, m_aabbMargin);
}
With OpenMP, 2 threads: the C++ code runs in about 7e-05 seconds, and Unity gets 125-130 FPS;
Without OpenMP: the C++ code takes 0.0002008 seconds, BUT Unity runs at 138 FPS!
So, again: how can OpenMP speed up the DLL while slowing down Unity at the same time?
More details would be great here, but:
This can depend on many aspects. Besides the ones mentioned by AlexGeorg:
What exactly are these OpenMP routines doing? Which OpenMP patterns are used? Which OpenMP version is used? Which data is relevant for OpenMP in your runtime context? How "local" are the data sets you operate on?
Common OpenMP usage does not guarantee that the main thread is nicely relieved. Even if you strictly separate the master thread's work from the OpenMP workers' work, that does not by itself ensure fluid behavior on the main core of your CPU; it further depends on aspects such as thread/CPU affinity.
Typical performance killers for OpenMP are cache(!) and sometimes pipeline bottlenecks, especially if there is a lot of interference with parts of Unity.
Maybe this has nothing to do with the FPS problem, but passing raw pointers (the vectors) into your OpenMP loop is questionable. It can lead to hidden bottlenecks, or worse problems, if not analyzed carefully, because it somewhat hides the shared state behind the pointers.
What is this refit method doing, and is it a static/const method? I'm not that familiar with Unity. Is there a chance of blocking GPU calls (CUDA)?
What you could try further is to measure
the general OpenMP thread-pool creation time in the Unity working context (you could use a much simpler task for that; a stand-alone sketch of such a measurement follows at the end of this answer). How often is your routine called?
You could also look for main-thread issues by removing the master thread (id 0) from the work.
If nothing helps, try comparing against another parallelization approach, for example plain std::thread or Intel Threading Building Blocks.
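A rough stand-alone sketch of that kind of overhead measurement (my own illustration, outside Unity, so the numbers inside the engine will differ): it times a parallel region whose body does almost nothing, which approximates the per-call OpenMP dispatch and synchronization cost for a routine that is invoked every frame.
#include <cstdio>
#include <omp.h>

int main()
{
    volatile int sink = 0;      // keeps the tiny loop from being optimized away
    const int calls = 1000;     // pretend the host calls the routine once per frame

    double t0 = omp_get_wtime();
    for (int frame = 0; frame < calls; ++frame)
    {
        int total = 0;
        #pragma omp parallel for schedule(static) reduction(+:total)
        for (int i = 0; i < 64; ++i)    // trivially small amount of work
            total += i;
        sink += total;
    }
    double t1 = omp_get_wtime();

    std::printf("average OpenMP overhead per call: %g ms\n",
                1000.0 * (t1 - t0) / calls);
    return 0;
}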

How can I improve the performance of my OpenMP code?

I am currently trying to improve the parallel performance of my code and I am still new to OpenMP. I have to iterate over a large container, in each iteration reading from multiple entries and writing a result to a single entry. Below is a very minimal code example of what I am trying to do.
data is a pointer to an array where a lot of data points are stored. Before the parallel region I create an array newData, so I can use data as read-only and newData as write-only; afterwards I throw the old data away and use newData for further calculations.
To my understanding, data and newData are shared between threads and everything declared inside the parallel region is private.
Can reading from data by multiple threads cause performance issues?
I am using #pragma omp critical for assigning a new value to an element of newData to avoid race conditions. Is this necessary, given that I access every element of newData only once and never from multiple threads?
Also, I am not sure about scheduling. Do I have to specify whether I want a static or dynamic schedule? Can I use nowait, since all threads are independent of each other?
array *newData = new array;
omp_set_num_threads(threads);
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < range; i++)
    {
        double middle   = (*data)[i];
        double previous = (*data)[i-1];
        double next     = (*data)[i+1];
        double new_value = (previous + middle + next) / 3.0;
        #pragma omp critical(assignment)
        (*newData)[i] = new_value;
    }
}
delete data;
data = newData;
I am aware that in the first and last iterations previous and next cannot be read from data; in the real code this is taken care of, but for this minimal example you get the idea of reading multiple times from data.
First of all, get rid of all unnecessary dependencies. #pragma omp critical(assignment) is not necessary because each index of (*newData) is written to only once per loop, so there is no race condition.
Your code could now look like this:
#pragma omp parallel for
for (int i = 0; i < range; i++)
    (*newData)[i] = ((*data)[i-1] + (*data)[i] + (*data)[i+1]) / 3.0;
Now we're looking for bottlenecks. The list of potential candidates I came up with is this:
Slow division
Cache thrashing
ILP (Instruction level parallelism)
Memory bandwidth limitations
Hidden dependencies
So let's analyze them further.
Slow division:
It takes some CPUs forever to calculate double/double. To know how long it takes and what throughput your CPU has, you have to look at its specs. Maybe replacing /3.0 with *0.3333... might help, but maybe your compiler does this already. Using extended instruction sets (like SSE/AVX) you might schedule several divisions/multiplications at once.
Cache thrashing:
Because your CPU has to load/store one cache line at a time, there can be conflicts. Imagine thread 1 trying to write to (*newData)[1] and thread 2 to (*newData)[2] while they are on the same cache line. Now one of them has to wait for the other. You could resolve this with #pragma omp parallel for schedule(static, 64), for example like this:
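A minimal sketch of that suggestion (my illustration, reusing the loop from above): chunks of 64 consecutive elements keep different threads' writes on different cache lines, except at the chunk boundaries, for typical 64-byte lines and 8-byte doubles.
#pragma omp parallel for schedule(static, 64)
for (int i = 0; i < range; i++)
    (*newData)[i] = ((*data)[i-1] + (*data)[i] + (*data)[i+1]) / 3.0;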
ILP:
CPUs can schedule multiple operations into a pipeline if the operations are independent. For this to happen you have to unroll your loop. It could look like this:
assert(range % 4 == 0);
#pragma omp parallel for
for (int i = 0; i < range/4; i++) {
    (*newData)[i*4+0] = ((*data)[i*4-1] + (*data)[i*4+0] + (*data)[i*4+1]) / 3.0;
    (*newData)[i*4+1] = ((*data)[i*4+0] + (*data)[i*4+1] + (*data)[i*4+2]) / 3.0;
    (*newData)[i*4+2] = ((*data)[i*4+1] + (*data)[i*4+2] + (*data)[i*4+3]) / 3.0;
    (*newData)[i*4+3] = ((*data)[i*4+2] + (*data)[i*4+3] + (*data)[i*4+4]) / 3.0;
}
Memory bandwidth limitations:
For your very simple loop, think about this: how much memory do you have to load, and how long will your CPU be busy processing it? You're loading about one cache line and computing some dereferences, some pointer additions, two additions and one division. Which limit you hit depends on your CPU specs.
Now consider cache locality. Can you modify your code to make better use of the cache? If one thread gets i=3 in one loop iteration and i=7 in the next, you have to reload three elements of (*data). But if you go from i=3 to i=4, you might not have to load anything, because (*data)[i+1] was in the cache line loaded previously. You save some RAM bandwidth. To make use of this, unroll the loop. Using float instead of double also increases this chance.
Hidden dependencies:
Now this part I personally find very tricky. Sometimes your compiler isn't sure it can reuse some data, because it doesn't know the data hasn't changed. Using const helps the compiler. But sometimes you need restrict to give the compiler the right hint. I don't understand this well enough to explain it in detail.
So here is what I would try:
const double ONETHIRD = 1.0 / 3.0;
assert(range % 4 == 0);
#pragma omp parallel for schedule(static, 1024)
for (int i = 0; i < range/4; i++) {
(*newData)[i*4+0] = ((*data)[i*4-1] + (*data)[i*4+0] + (*data)[i*4+1]) * ONETHIRD;
(*newData)[i*4+1] = ((*data)[i*4+0] + (*data)[i*4+1] + (*data)[i*4+2]) * ONETHIRD;
(*newData)[i*4+2] = ((*data)[i*4+1] + (*data)[i*4+2] + (*data)[i*4+3]) * ONETHIRD;
(*newData)[i*4+3] = ((*data)[i*4+2] + (*data)[i*4+3] + (*data)[i*4+4]) * ONETHIRD;
}
And then benchmark. Benchmark some more, and benchmark some more. Only benchmarks will show you which tricks help.
PS: One more thing to consider. If you see your program hitting the memory bandwidth hard, you could consider changing the algorithm. Maybe fuse two smoothing steps into one, going from
b[i] := (a[i-1] + a[i] + a[i+1]) / 3.0
to
d[i] := (b[i-1] + b[i] + b[i+1]) / 3.0 = (a[i-2] + 2.0 * a[i-1] + 3.0 * a[i] + 2.0 * a[i+1] + a[i+2]) / 9.0. I think you will work out for yourself why this helps.
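A minimal sketch of the fused pass (my own illustration, reusing the names from the code above and skipping the array borders for brevity): one loop now produces what two averaging passes would, roughly halving the memory traffic at the cost of a slightly wider stencil.
const double ONENINTH = 1.0 / 9.0;
#pragma omp parallel for schedule(static, 1024)
for (int i = 2; i < range - 2; i++)
    (*newData)[i] = ((*data)[i-2] + 2.0 * (*data)[i-1] + 3.0 * (*data)[i]
                   + 2.0 * (*data)[i+1] + (*data)[i+2]) * ONENINTH;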
Have fun optimizing ;-)
Reading an array from multiple threads usually does no harm.
You only need a critical section if multiple threads work on the exact same piece of data; here each thread accesses a different part of the array, so you don't need one. Critical sections are very bad for performance, so only use them if absolutely necessary. Often they can be replaced by atomic operations:
openMP, atomic vs critical?
Like a critical section, atomics don't make sense if each thread accesses different data. For the case where threads really do share one variable, a small sketch of the alternatives follows.
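Minimal sketch (my own illustration, not your code) of three ways to update one genuinely shared counter; a reduction is usually fastest, an atomic update next, and a critical section slowest.
#include <cstdio>
#include <omp.h>

int main()
{
    const int n = 1000000;
    int countCritical = 0, countAtomic = 0, countReduction = 0;

    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        if (i % 3 == 0)
        {
            #pragma omp critical(counting)
            ++countCritical;                // serializes every single increment
        }

    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        if (i % 3 == 0)
        {
            #pragma omp atomic
            ++countAtomic;                  // cheaper hardware-level update
        }

    #pragma omp parallel for reduction(+:countReduction)
    for (int i = 0; i < n; i++)
        if (i % 3 == 0)
            ++countReduction;               // each thread counts privately, merged at the end

    std::printf("%d %d %d\n", countCritical, countAtomic, countReduction);
    return 0;
}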
For the scheduler it's best to test each one and measure the performance, since predictions about performance are often wrong. Also try different chunk sizes.
Some other things that might help:
Measuring performance is often disturbed by other tasks on your PC, so take multiple measurements and use their minimum (unless the input is different each time; then take the average and do more measurements).
Do you really need double precision? Floats are a lot faster.
edit: nowait is for multiple independent for loops (https://msdn.microsoft.com/en-us/library/ek5st0e3.aspx); a small sketch follows below.
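Minimal sketch of what nowait is for (my own illustration with invented array names): two independent loops inside one parallel region. Without nowait, every thread would wait at the end of the first loop before any thread could start the second.
#include <cmath>
#include <vector>

int main()
{
    const int n = 1 << 20;
    std::vector<float> a(n), b(n);

    #pragma omp parallel
    {
        #pragma omp for nowait      // no barrier here: threads move straight on to the next loop
        for (int i = 0; i < n; i++)
            a[i] = std::sin(0.001f * i);

        #pragma omp for             // the parallel region still ends with an implicit barrier
        for (int i = 0; i < n; i++)
            b[i] = std::cos(0.001f * i);
    }
    return 0;
}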
I assume you are trying to do some kind of convolution or median blur on a 1D array. The short answer is: stick to the default schedule strategy and get rid of critical altogether.
As far as I can tell, you are quite new to parallelism, and it is a bit confusing to deal with OpenMP directives like nowait/private/reduction/critical/atomic/single, etc. I think what you need is a well-written textbook to clarify the various concepts. With a sound foundation, an hour of learning OpenMP can be enough to handle most day-to-day programming.

OpenMP performance with omp_get_max_threads greater than the number of cores

I am a novice in parallel programming. I am running my own Gibbs sampler written in C++. The overview of the program looks something like this:
for (int iter = 0; iter <= itermax; iter++) {       // loop1
    #pragma omp parallel for schedule(dynamic)
    for (int jobs = 0; jobs <= 1000; jobs++) {       // loop2
        small_job();
        #pragma omp critical(dataupdate)
        {
            data_updates();
        }
    }
    jobs_that_cannot_be_parallelized();
}
I am running on a machine with 64 cores. Since the small_job tasks are small and of variable length, I set the thread count so that omp_get_max_threads returns 128. The number of cores used seems to be correct (see the "load last hour" figure); each peak belongs to loop2.
However, when I look at the actual CPU usage (see figure), it seems a lot of CPU time is spent in the system and only 20% in user code. Is it because I am spawning lots of threads in loop2? What are the best practices for deciding on omp_get_max_threads? I know I have not given enough information, but I would really appreciate any other recommendation to make the program faster.

Make g++ produce a program that can use multiple cores?

I have a C++ program with multiple for loops; each one runs about 5 million iterations. Is there any command I can use with g++ so that the resulting .exe uses multiple cores, i.e. makes the first for loop run on the first core and the second for loop run on the second core at the same time? I've tried -O3 and -O3 -ftree-vectorize, but in both cases my CPU usage still only hovers at around 25%.
EDIT:
Here is my code, in case it helps. I'm basically just making a program to test the speed capabilities of my computer.
#include <iostream>
using namespace std;
#include <math.h>

int main()
{
    float *bob = new float[50102133];
    float *jim = new float[50102133];
    float *joe = new float[50102133];
    int i, j, k, l;
    //cout << "Starting test...";
    for (i = 0; i < 50102133; i++)
        bob[i] = sin(i);
    for (j = 0; j < 50102133; j++)
        bob[j] = sin(j*j);
    for (k = 0; k < 50102133; k++)
        bob[k] = sin(sqrt(k));
    for (l = 0; l < 50102133; l++)
        bob[l] = cos(l*l);
    cout << "finished test.";
    cout << "the 100120 element is," << bob[1001200];
    return 0;
}
The most obvious choice would be to use OpenMP. Assuming your loop is one where it's really easy to execute multiple iterations in parallel, you might be able to just add:
#pragma omp parallel for
...immediately before the loop and get it to execute in parallel. You'll also have to add -fopenmp when you compile.
Depending on the content of the loop, that may give anywhere from a nearly linear speedup to slowing the code down somewhat. In the latter cases (slowdown or minimal speedup) there may be other things you can do with OpenMP to help, but without knowing at least a little about the code itself, it's hard to guess what to do or what improvement you can expect at most.
The other advice you're getting ("use threads") may be suitable. OpenMP is basically an automated way of putting threads to use for specific types of parallel code. For a situation such as you describe (executing multiple iterations of a loop in parallel), OpenMP is generally preferred: it's quite a bit simpler to implement, and may well give better performance unless you know multithreading quite well and/or expend a great deal of effort on parallelizing the code.
Edit:
The code you gave in the question probably won't benefit from multiple threads. The problem is that it does very little computation on each data item before writing the result out to memory. Even a single core can probably do the computation fast enough that the overall speed is limited by the bandwidth to memory.
To stand a decent chance of getting some real benefit from multiple threads, you probably want to write code that does more computation and less pure reading and writing of memory. For example, if we collapse your computations together and do all of them on a single item, then sum the results:
double total = 0;
for (int i = 0; i < size; i++)
    total += sin(i) + sin(i*i) + sin(sqrt(i)) + cos(i*i);
By adding a pragma:
#pragma omp parallel for reduction(+:total)
...just before the for loop, we stand a good chance of seeing a substantial improvement in execution speed. Without OpenMP, I get a time like this:
Real 16.0399
User 15.9589
Sys 0.0156001
...but with the #pragma and OpenMP enabled when I compile, I get a time like this:
Real 8.96051
User 17.5033
Sys 0.0468003
So, on my (dual-core) processor, the time has dropped from 16 to 9 seconds: not quite twice as fast, but pretty close. Of course, how much improvement you get will depend on exactly how many cores you have available. For example, on my other computer (with an Intel i7 CPU), I get a rather larger improvement because it has more cores.
Without OpenMP:
Real 15.339
User 15.3281
Sys 0.015625
...and with OpenMP:
Real 3.09105
User 23.7813
Sys 0.171875
For completeness, here's the final code I used:
#include <math.h>
#include <iostream>

static const int size = 1024 * 1024 * 128;

int main(){
    double total = 0;
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < size; i++)
        total += sin(i) + sin(i*i) + sin(sqrt(i)) + cos(i*i);
    std::cout << total << "\n";
}
The compiler has no way to tell whether the code inside your loop can safely be executed on multiple cores. If you want to use all your cores, use threads.
Use threads or processes; you may also want to look at OpenMP.
C++11 has support for threading, but C++ compilers won't/can't do any threading on their own.
As others have pointed out, you can manually use threads to achieve this. You might look at libraries such as libdispatch (aka GCD) or Intel's TBB to help you do this with the least pain; a bare-bones std::thread version of the split is sketched below.
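A minimal sketch (my own illustration, not a drop-in replacement for your program) of splitting one of your loops across hardware threads by hand with std::thread; OpenMP does essentially this bookkeeping for you.
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

int main()
{
    const int n = 50102133;
    std::vector<float> bob(n);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;                  // fallback if the value is unknown
    const int chunk = (n + static_cast<int>(workers) - 1) / static_cast<int>(workers);

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < workers; ++t)
    {
        const int begin = static_cast<int>(t) * chunk;
        const int end = std::min(n, begin + chunk);
        pool.emplace_back([&bob, begin, end] {
            for (int i = begin; i < end; ++i)       // each thread owns a disjoint range
                bob[i] = std::sin(static_cast<float>(i));
        });
    }
    for (auto& th : pool)
        th.join();
    return 0;
}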
The -ftree-vectorize option you mention is for targeting the SIMD vector units of CPUs, such as ARM's NEON or Intel's SSE. The code produced is not thread-parallel but rather operation-parallel, using a single thread.
The code example posted above is highly amenable to parallelism on SIMD systems, as the body of each loop very obviously has no dependencies on the previous iteration and the operations in the loop are linear; a tiny example of such a loop, with the relevant g++ flags, is sketched below.
On some ARM Cortex-A series systems at least, you may need to accept slightly reduced accuracy to get the full benefit.
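For reference, a minimal sketch (my own, not from the question) of the kind of loop GCC's auto-vectorizer handles well; try something like g++ -O3 -ftree-vectorize vec.cpp, optionally with -ffast-math, and on recent GCC versions -fopt-info-vec reports which loops were vectorized.
#include <cstddef>

void scale_add(float* __restrict out, const float* __restrict a,
               const float* __restrict b, std::size_t n)
{
    // No cross-iteration dependency and unit-stride accesses:
    // each iteration maps directly onto SSE/NEON lanes.
    for (std::size_t i = 0; i < n; ++i)
        out[i] = 0.5f * a[i] + b[i];
}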

Is it possible to turn off any autoparallelization with gcc?

I am working on parallel algorithms using OpenMP. Judging from the CPU usage, much of the "sequential" code I write is actually executed in parallel.
For example:
#pragma omp parallel for if (par == "parallel")
for (int64_t u = 1; u <= n; ++u) {
    for (int64_t v = u + 1; v <= n; ++v) {
        ....
    }
}
This is conditionally parallel if a flag is set. With the flag set, I see CPU usage of 1500% on a 16-core machine. With the flag not set, I still see 250% CPU usage.
I suppose this is due to some autoparallelization going on. Is that correct? Does GCC do this?
Since I need to compare sequential and parallel running times, I would like code not annotated with #pragma omp parallel ... to run on one CPU only. Can I achieve this easily? Is there a GCC flag with which I can switch off autoparallelization and get parallelism only where I explicitly annotate with OpenMP?
Note that the OpenMP if clause exerts run-time rather than compile-time control over the concurrency. It means that while the condition inside the if clause might evaluate to false when the program is executed, which deactivates the parallel region by setting the number of threads in its team to 1, the region is still expanded into several runtime calls and a separate function for its body, even though this does not lead to parallel execution. The OpenMP runtime might also keep a pool of OpenMP threads busy-waiting for tasks.
The only way to guarantee that your OpenMP code compiles into a truly serial executable (given that you do not link against parallel libraries) is to compile with OpenMP support disabled. In your case that means not passing the -fopenmp option to GCC when the code is compiled.
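A minimal sketch (my own illustration) of the run-time behavior described above: with the if condition false, the region still goes through the OpenMP runtime but executes with a team of one thread, which you can verify with omp_get_num_threads and omp_in_parallel.
#include <cstdio>
#include <omp.h>

int main()
{
    bool par = false;               // pretend the "parallel" flag is not set

    #pragma omp parallel if (par)
    {
        // With par == false this prints "threads in team: 1, active: 0" exactly once.
        #pragma omp single
        std::printf("threads in team: %d, active: %d\n",
                    omp_get_num_threads(), omp_in_parallel() ? 1 : 0);
    }
    return 0;
}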