I am trying to get OpenMP task dependencies to work, to no avail.
Let's take this simplified example:
#include <iostream>
using namespace std;

int main()
{
    int x = 0;
    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task depend(in:x)
        {
            #pragma omp critical
            cout << "Read the value " << x << "\n";
        }
        #pragma omp task depend(out:x)
        {
            x = 3;
            #pragma omp critical
            cout << "Set the value to " << x << "\n";
        }
    }
    return 0;
}
As far as I understand (from the OpenMP specs), the depend(in:x) tasks should only be executed after all depend(out:x) tasks have been resolved, and so the expected output is
Set the value to 3
Read the value 3
However, the tasks are instead executed in the order they appear in the source, disregarding the depend clauses, and what I get is this:
Read the value 0
Set the value to 3
I am compiling using g++-7 (SUSE Linux) 7.3.1 20180323 [gcc-7-branch revision 258812] with the -fopenmp flag. This version of the compiler should have access to OpenMP 4.5.
Is this a misunderstanding of task dependencies on my side, or is there anything else at play here?
The concept of task dependencies can be misleading.
The best way to put it is to think about them as a way to indicate how different tasks access data and not as a way to control execution order.
The order of the tasks in the source code, together with the depend clauses, describes one of four possible scenarios: read after write, write after read, write after write, and read after read.
In your example you are describing a write-after-read case: you are telling the compiler that the second task will overwrite the variable x, but the first task takes x as an input, as indicated by depend(in:x). Therefore the runtime executes the first task before the second to prevent overwriting the initial value.
If you have a look at Intel's documentation here, there's a brief example which shows how task order (in the source code) still plays a role in determining the dependency graph (and therefore the execution order).
Another informative page on this matter is available here.
While going through a C++ tutorial book (it's in Spanish, so I apologize if my translation to English is not as proper as it should be), I came across a particular code snippet that I do not fully understand in terms of the different processes happening in the background. For example, in terms of multiple address spaces: how would I determine whether these all run within the context of a single process (given that a new thread is added on each push to the vector)? And how would I determine that each thread is different from the others, if they all perform exactly the same computation?
#include <iostream>
#include <vector>
#include <thread>
#include <cstdlib>
using namespace std;

int addthreads = 0;

void squarenum(int x) {
    addthreads += x * x * x;
}

int main() {
    vector<thread> septhread;
    for (int i = 1; i <= 9; i++) {
        septhread.push_back(thread(&squarenum, i));
    }
    for (auto& th : septhread) {
        th.join();
    }
    cout << "Your answer = " << addthreads << endl;
    system("pause");
    return 0;
}
Every run gives the answer 2025; that much I understand. My basic issue is understanding the first part of my question.
By the way, the compile command required (if you are on Linux):
g++ -std=gnu++11 -pthread threadExample.cpp -o threadExample
A thread is a "thread of execution" within a process, sharing the same address space, resources, etc. Depending on the operating system, hardware, etc, they may or may not run on the same CPU or CPU Thread.
A major issue with thread programming, as a result, is managing access to resources. If two threads access the same resource at the same time, Undefined Behavior can occur. If they are both reading, it may be fine, but if one is writing at the same moment the other is reading, numerous outcomes ensue. The simplest is that both threads are running on separate CPUs or cores and so the reader does not see the change made by the writer due to cache. Another is that the reader sees only a portion of the write (if it's a 64-bit value they might only see 32-bits changed).
Your code performs a read-modify-write operation: the first thread to come along sees the value 0, calculates x*x*x, adds it to 0, and stores the result.
Meanwhile the next thread comes along and does the same thing; it may also see 0 before performing its calculation, so it writes 0 + x*x*x, overwriting the first thread's result.
These threads need not run in the order you launched them; it's possible for thread #9 to get the first execution cycle rather than thread #1.
You may need to consider looking at std::atomic or std::mutex.
I'm trying to reduce the computation-time of my algorithm by using OpenMP parallelization (C++).
I tried simple things, but I don't quite understand how it works...
Here is my code:
int nthread = omp_get_max_threads();
#pragma omp parallel for num_threads(nthread)
for (int i = 0; i < 24; ++i)
    std::cout << omp_get_thread_num() << std::endl;
On my computer, nthread = 6. I don't understand why the output is:
0
0
0
... (24 times)
Why doesn't it give me the numbers from 0 to 5?
If I understand it well (correct me if I'm wrong), in this code there are 6 threads, each of which will execute the std::cout command.
Then why do I get only "0" as output?
Second thing: I would like each thread to execute a certain part of the loop. I want to divide my loop into 6 (nthread) different parts, so that each can be executed by a different thread.
Here, I want each of my 6 threads to execute
std::cout << omp_get_thread_num() << std::endl;
4 times.
How can I do it? I tried this:
#pragma omp parallel for num_threads(nthread)
for (int i = omp_get_thread_num() * (24 / nthread); i < (omp_get_thread_num() + 1) * (24 / nthread); ++i)
    std::cout << omp_get_thread_num() << std::endl;
Is it right? The output I have is:
0
0
0
0
Is it normal to have only the "0" thread and no other in the terminal?
Thank you
Only a partial answer, but I couldn't stay silent on this.
I tried this:
for (int i = omp_get_thread_num() * (24 / nthread); i < (omp_get_thread_num() + 1) * (24 / nthread); ++i)
    std::cout << omp_get_thread_num() << std::endl;
Is it right?
No, it's not right, not right at all! Your code is doing by hand the work of dividing the iterations across threads, which is exactly the job of the parallel for directive. A better model to follow would be
for (int i = 0; i < max_iters; ++i)
    // do work depending on i
and the compiler/run-time will take care of dividing the work across threads. Each thread will get its own set of values of i to work on.
This simple pattern is only correct if each iteration of the loop is independent of every other iteration, that is, there are no dependencies between, say, work(i) and work(i-1). But at the beginning that's probably enough to get you started.
As for the rest of your question it looks as if you aren't actually running the code in parallel. I suggest replacing
int nthread = omp_get_max_threads();
#pragma omp parallel for num_threads(nthread)
with
#pragma omp parallel for
that is, leave the number of threads to the default set up. If that doesn't work, edit your question with the results of your further investigations. And have a look round at SO, I'm fairly sure that you'll find a duplicate.
RyanP, you were absolutely right: I had missed the openmp keyword. I added it and now it works well! Thanks a lot.
Also thank you High Performance Mark for your answer,
#pragma omp parallel for
was enough for what I wanted to do.
I knew that
for(int i=omp_get_thread_num()*(24/nthread);i<(omp_get_thread_num()+1)*(24/nthread);++i)
std::cout << omp_get_thread_num() << std::endl;
was wrong, but since the other things I tried didn't work, I tried insane things. Thanks for your explanation, it's clearer now.
To resolve my problem, I simply added the following lines to my CMakeLists.txt:
find_package(OpenMP)
if (OPENMP_FOUND)
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
endif()
And it works well.
Thank you all
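As a side note, with CMake 3.9 or newer an equivalent and arguably cleaner way is to link against the imported target that find_package(OpenMP) provides, instead of appending flags to the global variables. A sketch, where myapp is a placeholder for your own executable target name:

```cmake
find_package(OpenMP REQUIRED)
# OpenMP::OpenMP_CXX carries the correct compile and link flags
# for whichever compiler is in use
target_link_libraries(myapp PRIVATE OpenMP::OpenMP_CXX)
```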
I wrote the classic game "Life" with 4-connected neighbors. When I run it in debug, it says:
Sequential version: 4.2s
Parallel version: 1.5s
Okay, that's good. But if I run it in release, it says:
Sequential version: 0.46s
Parallel version: 1.23s
Why? I run it on a computer with 4 cores and start 4 threads in the parallel section. The answer is correct, but the time leaks away somewhere and I don't know where. Can anybody help me?
I tried running it in Visual Studio 2008 and 2012; the results are the same. OpenMP is enabled in the project settings.
To reproduce my problem, you can find the defined constant PARALLEL and set it to 1 or 0 to enable or disable OpenMP correspondingly. The answer will be in out.txt (out.txt - right answer example). The input must be in in.txt (my input - in.txt). There are some Russian strings; you don't need to understand them, but the first number in in.txt is the number of threads to run in the parallel section (it's 4 in the example).
The main part is in the StartSimulation function. If you run the program, you will see some Russian text with the running time in the console.
The program code is big enough that I attach it via file hosting - main.cpp (l2 means "lab 2" for me).
Some comments about the StartSimulation function: it cuts the 2D surface of cells into small rectangles. This is done by the AdjustKernelsParameters function.
I do not find the ratio so strange. Having multiple threads co-operate is a complex business and carries overheads.
Access to shared memory needs to be serialized, which normally involves some form of locking mechanism and contention between threads while they wait for the lock to be released.
Such shared variables also need to be synchronized between the processor cores, which can cause significant slowdowns, and the compiler has to treat these critical areas differently, as a kind of "sequence point".
All this reduces the scope for per-thread optimization, both in the processor hardware and in the compiler, whenever a thread works with the shared variable.
It seems that in this case the overheads of parallelization outweigh the optimization possibilities of the single-threaded case.
If each thread had more independent work to do before needing to access a shared variable, these overheads would be less significant.
You are using a guided loop schedule. This is a very bad choice given that you are dealing with a regular problem, where each task can easily do exactly the same amount of work as any other if the domain is simply divided into chunks of equal size.
Replace schedule(guided) with schedule(static). Also employ a sum reduction over livingCount instead of using locked increments:
#if PARALLEL == 1
#pragma omp parallel for schedule(static) num_threads(kernelsCount) \
                         reduction(+:livingCount)
#endif
for (int offsetI = 0; offsetI < n; offsetI += kernelPartSizeN)
{
    for (int offsetJ = 0; offsetJ < m; offsetJ += kernelPartSizeM)
    {
        int boundsN = min(kernelPartSizeN, n - offsetI),
            boundsM = min(kernelPartSizeM, m - offsetJ);
        for (int kernelOffsetI = 0; kernelOffsetI < boundsN; ++kernelOffsetI)
        {
            for (int kernelOffsetJ = 0; kernelOffsetJ < boundsM; ++kernelOffsetJ)
            {
                if (BirthCell(offsetI + kernelOffsetI, offsetJ + kernelOffsetJ))
                {
                    ++livingCount;
                }
            }
        }
    }
}
I am trying to parallelise part of a C++ program using OpenMP, in Qt Creator on Linux in VirtualBox. The host system has a 4-core CPU. Since my initial attempts at using OpenMP pragmas didn't seem to work (the code with OpenMP took almost the same time as the code without), I went back to the OpenMP wiki and tried to run this simple example.
#include <stdio.h>

int main(void)
{
    #pragma omp parallel
    printf("Hello, world.\n");
    return 0;
}
and the output is just
'Hello, world'.
I also tried to run this piece of code
#include <iostream>
#include <omp.h>
using namespace std;

int main() {
    int thread_number;
    #pragma omp parallel private(thread_number)
    {
        #pragma omp for schedule(static) nowait
        for (int i = 0; i < 50; i++) {
            thread_number = omp_get_thread_num();
            cout << "Thread " << thread_number << " says " << i << endl;
        }
    }
    return 0;
}
and the output is:
Thread 0 says 0
Thread 0 says 1
Thread 0 says 2
...
Thread 0 says 49
So it looks like there is no parallelising happening after all. I have set
QMAKE_CXXFLAGS += -fopenmp
QMAKE_LFLAGS += -fopenmp
in the .pro file. Is this happening because I am running it from a virtual machine? How do I make multithreading work here? I would really appreciate any suggestions/pointers. Thank you.
Your problem is that VirtualBox always defaults to a machine with one core. Go to Settings/System/Processor and increase the number of CPUs to the number of hardware threads (4 in your case or eight if you have hyperthreading). If you have hyperthreading VirtualBox will warn you that you chose more CPUs than physical CPUs. Ignore the warning.
I set my CPUs to eight. When I use OpenMP in GCC on Windows I get eight threads.
Edit: According to VirtualBox's manual, you should set the number of CPUs to the number of physical cores, not the number of hyper-threads:
"You should not, however, configure virtual machines to use more CPU cores than you have available physically (real cores, no hyperthreads)."
Try setting the environment variable OMP_NUM_THREADS. The default may be 1 if your virtual machine says it has a single core (this was happening to me).
I am working on parallel algorithms using OpenMP. Judging from the CPU usage, much of the "sequential" code I write is actually executed in parallel.
For example:
#pragma omp parallel for if (par == "parallel")
for (int64_t u = 1; u <= n; ++u) {
    for (int64_t v = u + 1; v <= n; ++v) {
        ....
    }
}
This is conditionally parallel if a flag is set. With the flag set, I see CPU usage of 1500% on a 16-core machine. With the flag not set, I still see 250% CPU usage.
I suppose this is due to some autoparallelization going on. Correct? Does GCC do this?
Since I need to compare sequential and parallel running times, I would like code not annotated with #pragma omp parallel (etc.) to run on one CPU only. Can I achieve this easily? Is there a GCC flag by which I can switch off autoparallelization and have parallelism only where I explicitly annotate with OpenMP?
Note that the OpenMP if clause exerts run-time rather than compile-time control over the concurrency. Even if the condition inside the if clause evaluates to false when the program is executed, which deactivates the parallel region by setting the number of threads in its team to 1, the region is still compiled into several runtime calls and a separate function for its body, although this does not lead to parallel execution. The OpenMP runtime might also keep a pool of OpenMP threads busy-waiting for tasks.
The only way to guarantee that your OpenMP code would compile as a clearly serial executable (given that you do not link to parallel libraries) is to compile with OpenMP support disabled. In your case that would mean no -fopenmp option given to GCC while the code is being compiled.