OpenMP doesn't launch threads - C++

OpenMP used to run my project on 6 threads and now (I have no idea why) the program is single-threaded.
My code is pretty simple; I only use OpenMP in one .cpp file, where I declared
#include <omp.h>
The function to parallelize is:
#pragma omp parallel for collapse(2) num_threads(IntervalMapEstimator::m_num_thread)
for (int cell_index_x = m_min_cell_index_sensor_rot_x; cell_index_x <= m_max_cell_index_sensor_rot_x; cell_index_x++)
{
    for (int cell_index_y = m_min_cell_index_sensor_rot_y; cell_index_y <= m_max_cell_index_sensor_rot_y; cell_index_y++)
    {
        // used for debugging
        omp_set_num_threads(5);
        std::cout << "omp_get_num_threads = " << omp_get_num_threads() << std::endl;
        std::cout << "omp_get_max_threads = " << omp_get_max_threads() << std::endl;

        if (split_points) {
            extract_relevant_points_from_angle_lists(relevant_points, pointcloud_ff_polar_angle_lists, cell_min_angle_sensor_rot, cell_max_angle_sensor_rot);
        } else {
            extract_relevant_points_multithread_with_localvector(relevant_points, pointcloud, cell_min_angle_sensor_rot, cell_max_angle_sensor_rot);
        }
    }
}
omp_get_num_threads returns 1
omp_get_max_threads returns 5
IntervalMapEstimator::m_num_thread is set to 6
Any lead would be greatly appreciated.
EDIT 1:
I modified my code but the problem remains; the program still runs in a single thread.
omp_get_num_threads returns 1
omp_get_max_threads returns 8
Is there a way to know how many threads are available at run time?
#pragma omp parallel for collapse(2)
for (int cell_index_x = m_min_cell_index_sensor_rot_x; cell_index_x <= m_max_cell_index_sensor_rot_x; cell_index_x++)
{
    for (int cell_index_y = m_min_cell_index_sensor_rot_y; cell_index_y <= m_max_cell_index_sensor_rot_y; cell_index_y++)
    {
        std::cout << "omp_get_num_threads = " << omp_get_num_threads() << std::endl;
        std::cout << "omp_get_max_threads = " << omp_get_max_threads() << std::endl;
        extract_relevant_points(relevant_points, pointcloud, cell_min_angle_sensor_rot, cell_max_angle_sensor_rot);
    }
}
I just saw that my computer is starting to run low on memory; could that be part of the problem?

According to https://msdn.microsoft.com/en-us/library/bx15e8hb.aspx:
If a parallel region is encountered while dynamic adjustment of the number of threads is disabled, and the number of threads requested for the parallel region exceeds the number that the run-time system can supply, the behavior of the program is implementation-defined. An implementation may, for example, interrupt the execution of the program, or it may serialize the parallel region.
You request 6 threads, but the implementation can only provide 5, so it is free to do what it wants.
I'm also pretty sure you are not supposed to change the number of threads while inside a parallel region, so your omp_set_num_threads will do nothing at best and blow up in your face at worst.
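For what it's worth, here is a minimal sketch of where those calls normally go, with the parallel region contents as a placeholder: request the thread count before the region, then query the actual team size from inside it.

#include <iostream>
#include <omp.h>

int main()
{
    omp_set_num_threads(6);   // request the team size *before* the parallel region

    #pragma omp parallel
    {
        // Inside the region, omp_get_num_threads() reports the actual team size;
        // with OpenMP enabled it should be greater than 1 here.
        #pragma omp single
        std::cout << "team size = " << omp_get_num_threads() << std::endl;
    }
    return 0;
}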

I found the answer thanks to another post: Why does the compiler ignore OpenMP pragmas?
In the end it was simply a compiler flag I had not added, and I didn't notice it because I build with CMake, so I don't type the compile command directly. I also compile with catkin_make, so only errors (not warnings) show up in the console.
So basically, to use OpenMP you have to pass -fopenmp to your compiler, and if you don't... well, the pragmas are just silently ignored.
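A quick way to check from inside the program whether the flag actually took effect is the standard _OPENMP macro, which is only defined when the compiler has OpenMP support enabled (a minimal sketch):

#include <iostream>
#ifdef _OPENMP
#include <omp.h>
#endif

int main()
{
#ifdef _OPENMP
    // Defined only when compiling with OpenMP enabled (e.g. -fopenmp for GCC/Clang),
    // so reaching this branch means the pragmas are actually honored.
    std::cout << "OpenMP enabled, max threads = " << omp_get_max_threads() << std::endl;
#else
    std::cout << "OpenMP disabled: the pragmas are ignored" << std::endl;
#endif
    return 0;
}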

Related

OpenMP integer copied after tasks finish

I do not know if this is documented anywhere (if so, I would love a reference to it), but I have found some unexpected behaviour when using OpenMP. I have a simple program below to illustrate the issue. Here, in point form, is what I expect the program to do:
I want to have 2 threads
They both share an integer
The first thread increments the integer
The second thread reads the integer
After incrementing once, an external process must tell the first thread to continue incrementing (via a mutex lock)
The second thread is in charge of unlocking this mutex
As you will see, the counter shared between the threads is not updated properly for the second thread. However, if I turn the counter into an integer reference instead, I get the expected result. Here is a simple code example:
#include <mutex>
#include <thread>
#include <chrono>
#include <iostream>
#include <omp.h>
using namespace std;
using std::this_thread::sleep_for;
using std::chrono::milliseconds;
const int sleep_amount = 2000;
int main() {
    int counter = 0; // if I comment this and uncomment the 2 lines below, I get the expected results
    /* int c = 0; */
    /* int &counter = c; */

    omp_lock_t mut;
    omp_init_lock(&mut);

    int counter_1, counter_2;

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task default(shared)
        // The first task just increments the counter 3 times
        {
            while (counter < 3) {
                omp_set_lock(&mut);
                counter += 1;
                cout << "increasing: " << counter << endl;
            }
        }

        #pragma omp task default(shared)
        {
            sleep_for(milliseconds(sleep_amount));
            // While sleeping, counter is increased to 1 in the first task
            counter_1 = counter;
            cout << "counter_1: " << counter << endl;
            omp_unset_lock(&mut);

            sleep_for(milliseconds(sleep_amount));
            // While sleeping, counter is increased to 2 in the first task
            counter_2 = counter;
            cout << "counter_2: " << counter << endl;
            omp_unset_lock(&mut);
            // Release one last time to increment the counter to 3
        }
    }

    omp_destroy_lock(&mut);

    cout << "expected: 1, actual: " << counter_1 << endl;
    cout << "expected: 2, actual: " << counter_2 << endl;
    cout << "expected: 3, actual: " << counter << endl;
}
Here is my output:
increasing: 1
counter_1: 0
increasing: 2
counter_2: 0
increasing: 3
expected: 1, actual: 0
expected: 2, actual: 0
expected: 3, actual: 3
gcc version: 9.4.0
Additional discoveries:
If I use OpenMP 'sections' instead of 'tasks', I get the expected result as well. The problem seems to be with 'tasks' specifically.
If I use posix semaphores, this problem also persists.
It is not permitted to unlock a mutex from another thread; doing so causes undefined behavior. The general solution is to use semaphores in this case. Condition variables can also help (regarding real-world use cases). To quote the OpenMP documentation (note that this constraint is shared by nearly all mutex implementations, including pthreads):
A program that accesses a lock that is not in the locked state or that is not owned by the task that contains the call through either routine is non-conforming.
A program that accesses a lock that is not in the uninitialized state through either routine is non-conforming.
Moreover, the two tasks can be executed on the same thread or on different threads. You should not assume anything about their scheduling unless you tell OpenMP to do so with dependencies. Here, it is completely compliant for a runtime to execute the tasks serially. You need to use OpenMP sections if you want multiple threads to execute different blocks of code. Besides, it is generally considered bad practice to use locks in tasks, as the runtime scheduler is not aware of them.
Finally, you do not need a lock in this case: an atomic operation is sufficient. Fortunately, OpenMP supports atomic operations (as well as C++).
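For instance, a minimal sketch of the atomic update/read primitives referred to here (not the original program, just the building block):

#include <iostream>
#include <omp.h>

int main()
{
    int counter = 0;

    #pragma omp parallel num_threads(2)
    {
        for (int i = 0; i < 1000; ++i)
        {
            #pragma omp atomic update    // increment without a lock
            counter += 1;
        }

        int snapshot;
        #pragma omp atomic read          // consistent read of the shared scalar
        snapshot = counter;

        #pragma omp critical
        std::cout << "thread " << omp_get_thread_num()
                  << " sees counter >= " << snapshot << std::endl;
    }

    std::cout << "final counter = " << counter << std::endl;   // 2000 with two threads
    return 0;
}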
Additional notes
Note that locks guarantee the consistency of memory accesses across threads thanks to memory barriers. An unlock operation on a mutex causes a release memory barrier that makes writes visible to other threads, and a lock from another thread performs an acquire memory barrier that forces reads to happen after the lock. When locks/unlocks are not used correctly, memory accesses are no longer safe, and this can cause, for example, some variables not to be updated from other threads. More generally, this tends to create race conditions. So, put shortly: don't do that.

Wait for the whole omp block to be finished before calling the second function

I don't have experience with OpenMP in C++ and I would like to learn how to solve my problem properly. I have 30 files that need to be processed independently by the same function. Each time the function runs, a new output file is generated (out01.txt to out30.txt) saving the results. My machine has 12 processors and I would like to use 10 for this problem.
I need to change my code so that it waits for all 30 files to be processed before executing other routines in C++. At the moment, I'm not able to make my code wait for the whole omp scope to finish before moving on to the second function.
Please find below a draft of my code.
int W = 10;
int i = 1;
ostringstream fileName;
int th_id, nthreads;
omp_set_num_threads(W);
#pragma omp parallel shared(nFiles) private(i, fileName, th_id)
{
    #pragma omp for schedule(static)
    for (i = 1; i <= nFiles; i++)
    {
        th_id = omp_get_thread_num();
        cout << "Th_id: " << th_id << endl;
        // CALCULATION IS PERFORMED HERE FOR EACH FILE
    }
}
// THIS is the point where the program should wait for the whole block to be finished
// Calling the second function ...
Both the "omp for" and "omp parallel" pragmas have an implicit barrier at the end of their scope. Therefore, the code after the parallel section can't be executed until the parallel section has concluded, so your code should run correctly.
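To illustrate, a minimal sketch of the intended structure (the file-processing and second-step functions here are just placeholders): the call after the parallel loop only runs once every iteration has completed.

#include <iostream>
#include <sstream>
#include <omp.h>

void process_file(int index)   // placeholder for the per-file calculation
{
    std::ostringstream name;
    name << "out" << (index < 10 ? "0" : "") << index << ".txt";
    std::cout << "thread " << omp_get_thread_num() << " handled " << name.str() << "\n";
}

void second_function()         // placeholder for the follow-up routine
{
    std::cout << "all files processed, running the second step\n";
}

int main()
{
    const int nFiles = 30;
    omp_set_num_threads(10);

    #pragma omp parallel for schedule(static)
    for (int i = 1; i <= nFiles; i++)
        process_file(i);

    // Implicit barrier: execution only reaches this line after all iterations are done.
    second_function();
    return 0;
}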
If there is still a problem then it isn't because your code isn't waiting at the end of the parallel region.
Please supply us with more details about what happens during execution of this code. This way we might be able to find the real cause of your problem.

How to write to file from different threads, OpenMP, C++

I use OpenMP to parallelize my C++ program. My parallel code has a very simple form:
#pragma omp parallel for shared(a, b, c) private(i, result)
for (i = 0; i < N; i++) {
    result = F(a, b, c, i); // do some calculation
    cout << i << " " << result << endl;
}
If two threads try to write to the file simultaneously, the data gets mixed up.
How can I solve this problem?
OpenMP provides pragmas to help with synchronisation. #pragma omp critical allows only one thread to execute the attached statement at any time (a mutual-exclusion critical region). The #pragma omp ordered pragma ensures that loop iterations enter the region in loop order.
// g++ -std=c++11 -Wall -Wextra -pedantic -fopenmp critical.cpp
#include <iostream>

int main()
{
    #pragma omp parallel for
    for (int i = 0; i < 20; ++i)
        std::cout << "unsynchronized(" << i << ") ";
    std::cout << std::endl;

    #pragma omp parallel for
    for (int i = 0; i < 20; ++i)
        #pragma omp critical
        std::cout << "critical(" << i << ") ";
    std::cout << std::endl;

    #pragma omp parallel for ordered
    for (int i = 0; i < 20; ++i)
        #pragma omp ordered
        std::cout << "ordered(" << i << ") ";
    std::cout << std::endl;

    return 0;
}
Example output (different each time in general):
unsynchronized(unsynchronized(unsynchronized(05) unsynchronized() 6unsynchronized() 1unsynchronized(7) ) unsynchronized(unsynchronized(28) ) unsynchronized(unsynchronized(93) ) unsynchronized(4) 10) unsynchronized(11) unsynchronized(12) unsynchronized(15) unsynchronized(16unsynchronized() 13unsynchronized() 17) unsynchronized(unsynchronized(18) 14unsynchronized() 19)
critical(5) critical(0) critical(6) critical(15) critical(1) critical(10) critical(7) critical(16) critical(2) critical(8) critical(17) critical(3) critical(9) critical(18) critical(11) critical(4) critical(19) critical(12) critical(13) critical(14)
ordered(0) ordered(1) ordered(2) ordered(3) ordered(4) ordered(5) ordered(6) ordered(7) ordered(8) ordered(9) ordered(10) ordered(11) ordered(12) ordered(13) ordered(14) ordered(15) ordered(16) ordered(17) ordered(18) ordered(19)
The problem is: you have a single resource that all threads try to access. Such single resources must be protected against concurrent access (thread-safe resources do this too, just transparently for you; by the way, here is a nice answer about the thread safety of std::cout). You could now protect this single resource, e.g. with a std::mutex. The problem then is that the threads have to wait for the mutex until the other thread gives it back again, so you will only profit from parallelisation if F is a very complex function.
A further drawback: as the threads work in parallel, even with a mutex protecting std::cout, the results can be printed in arbitrary order, depending on which thread happens to run earlier.
If I may assume that you want the results of F(..., i) for smaller i before the results for greater i, you should either drop parallelisation entirely or do it differently:
Provide an array of size N and let each thread store its results there (array[i] = f(i);). Then iterate over the array in a separate non-parallel loop. Again, doing so is only worth the effort if F is a complex function (and for large N).
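A minimal sketch of that pattern, with F standing in for the real calculation:

#include <iostream>
#include <vector>

double F(int i)   // placeholder for the expensive per-element calculation
{
    return i * 0.5;
}

int main()
{
    const int N = 100;
    std::vector<double> results(N);

    // Parallel phase: each iteration writes to its own slot, so no locking is needed.
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        results[i] = F(i);

    // Sequential phase (after the implicit barrier of the loop): print in order.
    for (int i = 0; i < N; i++)
        std::cout << i << " " << results[i] << "\n";

    return 0;
}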
Additionally: be aware that threads must be created too, which causes some overhead (creating thread infrastructure and stacks, registering the threads with the OS, ... unless you can reuse threads already created in a thread pool earlier). Consider this as well when deciding whether you want to parallelise. Sometimes, non-parallel calculations can be faster...

How to make GNU GCC optimize OpenMP threads similarly

This is my first post here. Yay! Back to the problem:
I'm learning how to use OpenMP. My IDE is Code::Blocks. I want to improve some of my older programs. I need to be sure that the results will be exactly the same. It appears that "for" loops are optimized differently in the master thread than in the other threads.
Example:
#include <iostream>
#include <omp.h>

int main()
{
    std::cout.precision(17);

    #pragma omp parallel for schedule(static, 1) ordered
    for (int i = 0; i < 4; i++)
    {
        double sum = 0.;
        for (int j = 0; j < 10; j++)
        {
            sum += 10.1;
        }
        #pragma omp ordered
        std::cout << "thread " << omp_get_thread_num() << " says " << sum << "\n";
    }
    return 0;
}
produces
thread 0 says 101
thread 1 says 100.99999999999998579
thread 2 says 100.99999999999998579
thread 3 says 100.99999999999998579
Can I somehow make sure all threads receive the same optimization that my single-threaded programs (which didn't use OpenMP) received?
EDIT:
The compiler is "compiler and GDB debugger from TDM-GCC (version 4.9.2, 32 bit, SJLJ)", whatever that means. It's the IDE's "default". I'm not familiar with compiler differences.
The output provided comes from the "Release" build, which adds the "-O2" argument.
None of the "-O", "-O1" and "-O3" arguments produces "101".
You can try my .exe from dropbox (zip file, also contains possibly required dlls).
This happens because the float and double data types cannot exactly represent some numbers, like 20.2:
#include <iostream>

int main()
{
    std::cout.precision(17);
    double a = 20.2;
    std::cout << a << std::endl;
    return 0;
}
Its output will be:
20.199999999999999
For more information on this, see Unexpected Output when adding two float numbers.
I don't know why this doesn't happen for the first thread, but even if you remove OpenMP you will get the same result.
From what I can tell, this is simply numerical accuracy. For a double value type you should expect about 16 digits of precision; i.e. the result is 101 +/- 1.e-16*101.
That is exactly the range you get, and unless you use something like quadruple precision, this is as good as it gets.
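Whatever code the compiler happens to generate for each thread, the underlying point is that regrouping floating-point operations changes the rounding. A minimal sketch (the values in the comments are what I would expect with standard IEEE doubles):

#include <iostream>

int main()
{
    std::cout.precision(17);

    // Summing 10.1 ten times rounds after every addition...
    double loop_sum = 0.;
    for (int j = 0; j < 10; j++)
        loop_sum += 10.1;

    // ...while a single multiplication rounds only once.
    double folded = 10 * 10.1;

    std::cout << "loop_sum = " << loop_sum << "\n";   // 100.99999999999998579
    std::cout << "folded   = " << folded << "\n";     // 101
    return 0;
}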

C++ OpenMP for-loop global variable problems

I'm new to OpenMP and from what I have read about OpenMP 2.0, which comes standard with Microsoft Visual Studio 2010, global variables are considered troublesome and error prone when used in parallel programming. I have also been adopting this feeling since I have found very little on how to deal with global variables and static global variables efficiently, or at all for that matter.
I have this snippet of code which runs, but because of the local variable created in the parallel block I don't get the answer I'm looking for. I get 8 different printouts (because that's how many threads I have on my PC) instead of 1 answer. I know it's because of the local "list" variable created in the parallel block, but this code will not run if I move the "list" variable out and make it a global variable. Actually, the code does run, but it never gives me an answer back. This is the sample code that I would like to modify to use a global "list" variable:
#pragma omp parallel
{
    vector<int> list;
    #pragma omp for
    for (int i = 0; i < 50000; i++)
    {
        list.push_back(i);
    }
    cout << list.size() << endl;
}
Output:
6250
6250
6250
6250
6250
6250
6250
6250
They add up to 50000, but I did not get a single answer of 50000; instead it's divided up.
Solution:
vector<int> list;
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 50000; i++)
    {
        cout << i << endl;
        #pragma omp critical
        {
            list.push_back(i);
        }
    }
}
cout << list.size() << endl;
According to the MSDN documentation, the parallel clause:
Defines a parallel region, which is code that will be executed by multiple threads in parallel.
And since the list variable is declared inside this section every thread will have its own list.
On the other hand, the for pragma:
Causes the work done in a for loop inside a parallel region to be divided among threads.
So the 50000 iterations will be split among threads but each thread will have its own list.
I think what you are trying to do can be achieved by:
Taking the list definition outside the "parallel" section.
Protecting the list.push_back statement with a critical section.
Try this:
vector<int> list;
#pragma omp parallel
{
    #pragma omp for
    for (int i = 0; i < 50000; i++)
    {
        #pragma omp critical
        {
            list.push_back(i);
        }
    }
}
cout << list.size() << endl;
I don't think you should get any speedup from OpenMP in this case because there will be contention for the critical section. A faster solution for this (if you don't care about the order of elements) would be for every thread to have its own list, and get those lists merged after the loop finishes. The implementation using std::list instead of std::vector would look cleaner in this case (because you wouldn't have to copy arrays).
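A minimal sketch of that per-thread-container idea, assuming the order of the elements does not matter (shown with std::vector; a std::list version would splice instead of insert):

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> list;   // shared result

    #pragma omp parallel
    {
        std::vector<int> local;   // one container per thread, no locking needed

        #pragma omp for nowait
        for (int i = 0; i < 50000; i++)
            local.push_back(i);

        // Merge each thread's partial result exactly once, under a critical section.
        #pragma omp critical
        list.insert(list.end(), local.begin(), local.end());
    }

    std::cout << list.size() << std::endl;   // 50000
    return 0;
}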
Some apps are memory bound and not compute bound. Bottom line: check if you actually get a speedup from OpenMP.
Why do you need the first pragma here (#pragma omp parallel)? I think that's the issue.