I want to measure the performance of a parallel program implemented in C++ (OpenMP).
The recommended way to measure time with this technology is:
double start = omp_get_wtime();
// some code here
double end = omp_get_wtime();
printf_s("Time = %.16g", end - start);
But I get a time near 0, even though the program runs for about 8 seconds.
All other methods of getting the execution time return 0.0 as well.
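For reference, a complete minimal program using this pattern (a sketch, with arbitrary busy-work standing in for my code) should print a clearly nonzero time:
#include <cstdio>
#include <omp.h>

int main() {
    double start = omp_get_wtime();
    volatile double sink = 0.0;          // volatile so the loop isn't optimized away
    for (long i = 0; i < 200000000L; ++i)
        sink += i * 0.5;
    double end = omp_get_wtime();
    printf("Time = %.16g\n", end - start);
    return 0;
}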
I also tried these code snippets:
DWORD st = GetTickCount();
time_t time_start = time(NULL);
clock_t start = clock();
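// (assumption: Clock is an alias such as "using Clock = std::chrono::steady_clock;")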
auto t1 = Clock::now();
time_t time_finish = time(NULL);
DWORD fn = GetTickCount();
clock_t finish = clock();
auto t2 = Clock::now();
All without success. The program spends a lot of time running, but the results are always zero (in both debug and release mode).
If I step through in the debugger, the results differ from zero.
Here is my parallel #pragma directive:
#pragma omp parallel default(none) private(i) shared(nSum, nTheads, nMaxThreads, nStart, nEnd, data, modulo)
{
    #pragma omp master
    {
        nTheads = omp_get_num_threads();
        nMaxThreads = omp_get_max_threads();
    }
    #pragma omp for
    for (int i = nStart; i < nEnd; ++i)
    {
        #pragma omp atomic
        nSum += (power(data[i], i) * i) % modulo;
    }
}
Where is my error? Please help me; I have spent a lot of time on this problem.
Related
I have written the code below to parallelize two 'for' loops.
#include <iostream>
#include <omp.h>
#define SIZE 100

int main()
{
    int arr[SIZE];
    int sum = 0;
    int i, tid, numt, prod;
    double t1, t2;

    for (i = 0; i < SIZE; i++)
        arr[i] = 0;

    t1 = omp_get_wtime();
    #pragma omp parallel private(tid, prod)
    {
        tid = omp_get_thread_num();
        numt = omp_get_num_threads();
        std::cout << "Tid: " << tid << " Thread: " << numt << std::endl;

        #pragma omp for reduction(+: sum)
        for (i = 0; i < 50; i++) {
            prod = arr[i]+1;
            sum += prod;
        }

        #pragma omp for reduction(+: sum)
        for (i = 50; i < SIZE; i++) {
            prod = arr[i]+1;
            sum += prod;
        }
    }
    t2 = omp_get_wtime();
    std::cout << "Time taken: " << (t2 - t1) << ", Parallel sum: " << sum << std::endl;
    return 0;
}
In this case, the first 'for' loop is executed in parallel by all the threads and the result is accumulated in the sum variable. Once the first 'for' loop is done, the threads start executing the second 'for' loop in parallel, again accumulating the result in sum. Clearly, execution of the second 'for' loop waits for the first 'for' loop to finish.
I want the two 'for' loops to be processed simultaneously across threads. How can I do that? Is there any other way I could write this code more efficiently? Ignore the dummy work that I am doing inside the 'for' loops.
You can declare the loops nowait and move the reduction to the parallel region itself. Something like this:
#pragma omp parallel private(tid, prod) reduction(+: sum)
{
    #pragma omp for nowait
    for (i = 0; i < 50; i++) {
        prod = arr[i]+1;
        sum += prod;
    }
    #pragma omp for nowait
    for (i = 50; i < SIZE; i++) {
        prod = arr[i]+1;
        sum += prod;
    }
}
If you use #pragma omp for nowait, all threads are assigned to the first loop; the second loop will only start once at least one thread has finished its share of the first. Unfortunately, there is no way to tell the omp for construct to use, e.g., only half of the threads.
Fortunately, there is a way to run the two loops in parallel using tasks. The following code will use half of the threads to run the first loop and the other half to run the second one, using the taskloop construct and its num_tasks clause to control the tasks created for each loop. This will do exactly what you intended, but you have to test which solution is faster in your case.
#pragma omp parallel
#pragma omp single
{
    int n = omp_get_num_threads();
    #pragma omp taskloop num_tasks(n/2)
    for (int i = 0; i < 50; i++) {
        //do something
    }
    #pragma omp taskloop num_tasks(n/2)
    for (int i = 50; i < SIZE; i++) {
        //do something
    }
}
UPDATE: The first paragraph is not entirely correct: by changing the chunk_size you have some control over how many threads will be used in the first loop. This can be done with, e.g., the schedule(static, chunk_size) clause. So I thought setting the chunk_size would do the trick:
#pragma omp parallel
{
    int n = omp_get_num_threads();
    #pragma omp single
    printf("num_threads=%d\n", n);

    #pragma omp for schedule(static,2) nowait
    for (int i = 0; i < 4; i++) {
        printf("thread %d running 1st loop\n", omp_get_thread_num());
    }
    #pragma omp for schedule(static,2)
    for (int i = 4; i < SIZE; i++) {
        printf("thread %d running 2nd loop\n", omp_get_thread_num());
    }
}
BUT at first the result seems surprising:
num_threads=4
thread 0 running 1st loop
thread 0 running 1st loop
thread 0 running 2nd loop
thread 0 running 2nd loop
thread 1 running 1st loop
thread 1 running 1st loop
thread 1 running 2nd loop
thread 1 running 2nd loop
What is going on? Why are threads 2 and 3 not used? The OpenMP runtime guarantees that if you have two separate loops with the same number of iterations and you execute them with the same number of threads using static scheduling, then each thread will receive exactly the same iteration ranges in both loop regions. With only 4 iterations and a chunk size of 2, the first loop has just two chunks, so only threads 0 and 1 receive work there.
On the other hand, the result of using the schedule(dynamic,2) clause was quite surprising: only one thread is used. CodeExplorer link is here.
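For reference, a self-contained sketch of that variant (assuming only the schedule clauses change relative to the static example above):
#include <cstdio>
#include <omp.h>
#define SIZE 100

int main() {
    #pragma omp parallel
    {
        // With dynamic scheduling, whichever thread grabs a chunk first keeps
        // taking the next one; for trivial loop bodies one thread can take all.
        #pragma omp for schedule(dynamic, 2) nowait
        for (int i = 0; i < 4; i++)
            printf("thread %d running 1st loop\n", omp_get_thread_num());
        #pragma omp for schedule(dynamic, 2)
        for (int i = 4; i < SIZE; i++)
            printf("thread %d running 2nd loop\n", omp_get_thread_num());
    }
    return 0;
}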
I am trying to parallelise these recursive functions with OpenMP tasks.
When I compile with GCC, it runs on only one thread. When I compile it with Clang, it runs on multiple threads.
The second function calls the first one, which doesn't generate new tasks, to stop wasting time.
GCC does work when there is only one function that calls itself.
Why is this?
Am I doing something wrong in the code?
Then why does it work with Clang?
I am using GCC 9.3 on Windows with MSYS2.
The code was compiled with -O3 -fopenmp.
//the program compiled by gcc only runs on one thread
#include <vector>
#include <omp.h>
#include <iostream>
#include <ctime>
using namespace std;

vector<int> vec;
thread_local double steps;

void excalibur(int current_node, int current_depth) {
    #pragma omp simd
    for (int i = 0; i < current_node; i++) {
        ++steps;
        excalibur(i, current_depth);
    }
    if (current_depth > 0) {
        int new_depth = current_depth - 1;
        #pragma omp simd
        for (int i = current_node; i <= vec[current_node]; i++) {
            ++steps;
            excalibur(i + 1, new_depth);
        }
    }
}

void mario(int current_node, int current_depth) {
    #pragma omp task firstprivate(current_node, current_depth)
    {
        if (current_depth > 0) {
            int new_depth = current_depth - 1;
            for (int i = current_node; i <= vec[current_node]; i++) {
                ++steps;
                mario(i + 1, new_depth);
            }
        }
    }
    #pragma omp simd
    for (int i = 0; i < current_node; i++) {
        ++steps;
        excalibur(i, current_depth);
    }
}

int main() {
    double total = 0;
    clock_t tim = clock();
    omp_set_dynamic(0);
    int nodes = 10;
    int timesteps = 3;
    omp_set_num_threads(4);
    vec.assign(nodes, nodes - 2);
    #pragma omp parallel
    {
        steps = 0;
        #pragma omp single
        {
            mario(nodes - 1, timesteps - 1);
        }
        #pragma omp atomic
        total += steps;
    }
    double time_taken = (double)(clock() - tim) / CLOCKS_PER_SEC;  // elapsed CPU time
    cout << fixed << total << " steps, " << fixed << time_taken << " seconds" << endl;
    return 0;
}
while this works with GCC:
#include <vector>
#include <omp.h>
#include <iostream>
#include <ctime>
using namespace std;

vector<int> vec;
thread_local double steps;

void mario(int current_node, int current_depth) {
    #pragma omp task firstprivate(current_node, current_depth)
    {
        if (current_depth > 0) {
            int new_depth = current_depth - 1;
            for (int i = current_node; i <= vec[current_node]; i++) {
                ++steps;
                mario(i + 1, new_depth);
            }
        }
    }
    #pragma omp simd
    for (int i = 0; i < current_node; i++) {
        ++steps;
        mario(i, current_depth);
    }
}

int main() {
    double total = 0;
    clock_t tim = clock();
    omp_set_dynamic(0);
    int nodes = 10;
    int timesteps = 3;
    omp_set_num_threads(4);
    vec.assign(nodes, nodes - 2);
    #pragma omp parallel
    {
        steps = 0;
        #pragma omp single
        {
            mario(nodes - 1, timesteps - 1);
        }
        #pragma omp atomic
        total += steps;
    }
    double time_taken = (double)(clock() - tim) / CLOCKS_PER_SEC;  // elapsed CPU time
    cout << fixed << total << " steps, " << fixed << time_taken << " seconds" << endl;
    return 0;
}
Your program doesn't run in parallel because there is simply nothing to run in parallel. Upon first entry in mario, current_node is 9 and vec is all 8s, so this loop in the first and only task never executes:
for (int i = current_node; i <= vec[current_node]; i++) {
    ++steps;
    mario(i + 1, new_depth);
}
Hence, no recursive creation of new tasks. How and what runs in parallel when you compile it with Clang is well beyond me, since when I compile it with Clang 9, the executable behaves exactly the same as the one produced by GCC.
The second code runs in parallel because of the recursive call in the loop after the task region. But it also isn't a correct OpenMP program - the specification forbids nesting task regions inside a simd construct (see under Restrictions here):
The only OpenMP constructs that can be encountered during execution of a simd region are the atomic construct, the loop construct, the simd construct and the ordered construct with the simd clause.
Neither compiler catches that problem when the nesting is in the dynamic rather than the lexical scope of the simd construct, though.
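A minimal sketch of a conforming variant (an assumption on my part: simply dropping the simd construct from the loop whose body ends up creating tasks):
void mario(int current_node, int current_depth) {
    #pragma omp task firstprivate(current_node, current_depth)
    {
        if (current_depth > 0) {
            int new_depth = current_depth - 1;
            for (int i = current_node; i <= vec[current_node]; i++) {
                ++steps;
                mario(i + 1, new_depth);
            }
        }
    }
    // plain loop: no simd here, since task regions must not be
    // encountered during execution of a simd region
    for (int i = 0; i < current_node; i++) {
        ++steps;
        mario(i, current_depth);
    }
}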
Edit: I actually looked into it a bit more closely and I have a suspicion about what might have caused your confusion. I guess you determine whether your program works in parallel or not by looking at the CPU utilisation while it runs. This often leads to confusion. The Intel OpenMP runtime that Clang uses has a very aggressive waiting policy. When the parallel region in the main() function spawns a team of four threads, one of them goes off executing mario() and the other three hit the implicit barrier at the end of the region. There they spin, waiting for new tasks to eventually be assigned to them. They never get one, but they keep on spinning anyway, and that's what you see in the CPU utilisation. If you want to replicate the same with GCC, set OMP_WAIT_POLICY to ACTIVE and you'll see the CPU usage soar while the program runs. Still, if you profile the program's execution, you'll see that CPU time is spent inside your code in one thread only.
This question already has answers here: OpenMP time and clock() give two different results.
I have to add two vectors and compare serial performance against parallel performance.
However, my parallel code seems to take longer to execute than the serial code.
Could you please suggest changes to make the parallel code faster?
#include <iostream>
#include <time.h>
#include "omp.h"
#define ull unsigned long long
using namespace std;

void parallelAddition(ull N, const double *A, const double *B, double *C)
{
    ull i;
    #pragma omp parallel for shared(A,B,C,N) private(i) schedule(static)
    for (i = 0; i < N; ++i)
    {
        C[i] = A[i] + B[i];
    }
}

int main(){
    ull n = 100000000;
    double* A = new double[n];
    double* B = new double[n];
    double* C = new double[n];
    double time_spent = 0.0;

    for (ull i = 0; i < n; i++)
    {
        A[i] = 1;
        B[i] = 1;
    }

    //PARALLEL
    clock_t begin = clock();
    parallelAddition(n, &A[0], &B[0], &C[0]);
    clock_t end = clock();
    time_spent += (double)(end - begin) / CLOCKS_PER_SEC;
    cout << "time elapsed in parallel : " << time_spent << endl;

    //SERIAL
    time_spent = 0.0;
    for (ull i = 0; i < n; i++)
    {
        A[i] = 1;
        B[i] = 1;
    }
    begin = clock();
    for (ull i = 0; i < n; ++i)
    {
        C[i] = A[i] + B[i];
    }
    end = clock();
    time_spent += (double)(end - begin) / CLOCKS_PER_SEC;
    cout << "time elapsed in serial : " << time_spent;
    return 0;
}
These are the results:
time elapsed in parallel : 0.824808
time elapsed in serial : 0.351246
I've read in another thread that there are factors like thread spawning and resource allocation, but I don't know what to do to get the expected result.
EDIT:
Thanks! #zulan's and #Daniel Langr's answers actually helped!
I used omp_get_wtime() instead of clock().
It turns out that clock() measures the cumulative CPU time of all threads, whereas omp_get_wtime() measures the wall-clock time elapsed from one arbitrary point to another.
This answer also covers this query pretty well: https://stackoverflow.com/a/10874371/4305675
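A tiny self-contained illustration of the difference (a sketch; on Linux with 4 threads, the clock() figure comes out roughly 4 times the omp_get_wtime() figure):
#include <iostream>
#include <ctime>
#include <omp.h>

int main() {
    std::clock_t c0 = std::clock();
    double w0 = omp_get_wtime();
    double sink = 0.0;
    #pragma omp parallel for num_threads(4) reduction(+: sink)
    for (long i = 0; i < 200000000L; ++i)
        sink += i * 1e-9;
    double w1 = omp_get_wtime();
    std::clock_t c1 = std::clock();
    std::cout << "clock():         " << double(c1 - c0) / CLOCKS_PER_SEC
              << " s (CPU time summed over threads)" << std::endl;
    std::cout << "omp_get_wtime(): " << (w1 - w0)
              << " s (wall time), sink = " << sink << std::endl;  // printing sink keeps the loop alive
    return 0;
}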
Here's the fixed code:
void parallelAddition(ull N, const double *A, const double *B, double *C)
{
    ....
}

int main(){
    ....
    //PARALLEL
    double begin = omp_get_wtime();
    parallelAddition(n, &A[0], &B[0], &C[0]);
    double end = omp_get_wtime();
    time_spent += end - begin;
    cout << "time elapsed in parallel : " << time_spent << endl;
    ....
    //SERIAL
    begin = omp_get_wtime();
    for (ull i = 0; i < n; ++i)
    {
        C[i] = A[i] + B[i];
    }
    end = omp_get_wtime();
    time_spent += end - begin;
    cout << "time elapsed in serial : " << time_spent;
    return 0;
}
RESULT AFTER CHANGES:
time elapsed in parallel : 0.204763
time elapsed in serial : 0.351711
There are multiple factors that influence your measurements:
Use omp_get_wtime() as #zulan suggested; otherwise, you may actually calculate the combined CPU time instead of the wall time.
Threading has some overhead and typically does not pay off for short calculations. You may want to use a higher n.
"Touch" the data in the C array before running parallelAddition; otherwise, the memory pages are actually allocated by the OS inside parallelAddition. An easy fix since C++11: double* C = new double[n]{};.
I tried your program with n being 1G, and the last change reduced the runtime of parallelAddition from 1.54 to 0.94 [s] for 2 threads. The serial version took 1.83 [s]; the speedup with 2 threads was therefore 1.95, which is pretty close to ideal.
Other considerations:
Generally, if you profile something, make sure that the program has some observable effect. Otherwise, a compiler may optimize a lot of code away. Your array addition has no observable effect.
Add some form of the restrict keyword to the C parameter; without it, a compiler might not be able to apply vectorization (see the sketch after this list).
If you are on a multi-socket system, take care with thread affinity and NUMA effects. On my dual-socket system, a parallel run with 2 threads took 0.94 [s] (as mentioned above) when restricting the threads to a single NUMA node (numactl -N 0 -m 0). Without numactl, it took 1.35 [s], thus 1.44 times more.
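A combined sketch of the restrict and first-touch points (assuming the same ull, n, and array names as above; __restrict__ is the GCC/Clang spelling in C++):
#define ull unsigned long long   // as in the question

// restrict-qualified parameters let the compiler assume A, B and C
// never alias, which helps it vectorize the loop
void parallelAddition(ull N, const double* __restrict__ A,
                      const double* __restrict__ B, double* __restrict__ C)
{
    #pragma omp parallel for schedule(static)
    for (ull i = 0; i < N; ++i)
        C[i] = A[i] + B[i];
}

// parallel first-touch initialization with the same static schedule,
// so each thread touches (and thereby places) the pages it will later use
void firstTouch(ull N, double* A, double* B, double* C)
{
    #pragma omp parallel for schedule(static)
    for (ull i = 0; i < N; ++i) {
        A[i] = 1.0;
        B[i] = 1.0;
        C[i] = 0.0;
    }
}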
I am trying to parallelize this: c[i]=a[i]+b[i]
Using a plain C program I am getting:
Elapsed time = 1667417 nanoseconds
with OpenMP I get:
Elapsed time = 8673966 nanoseconds
I don't clearly understand why this is happening or what needs to be done to parallelize this code. I assume that, since it is a very simple addition, parallelism is probably not being exploited here, but I would like to know the correct reason and any other way in which I could effectively parallelize this addition. I also tried using dynamic, guided, and various chunk sizes, but they give more or less similar results.
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <omp.h>
#define N 100
#define BILLION 1000000000L

int main (int argc, char *argv[])
{
    int i;
    float a[N], b[N], c[N];
    uint64_t diff;                 /* Elapsed time */
    struct timespec start, end;

    /* Some initializations */
    #pragma omp parallel for schedule(static,10) num_threads(4)
    for (i = 0; i < N; i++){
        a[i] = b[i] = i * 1.0;
    }

    /* add two arrays */
    clock_gettime(CLOCK_MONOTONIC, &start);   /* mark start time */
    #pragma omp parallel for schedule(static) num_threads(4)
    for (i = 0; i < N; i++){
        c[i] = a[i] + b[i];
        printf("Thread number:%d,c[%d]= %f\n", omp_get_thread_num(), i, c[i]);
    }
    clock_gettime(CLOCK_MONOTONIC, &end);     /* mark the end time */

    diff = BILLION * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
    printf("\nElapsed time = %llu nanoseconds\n", (long long unsigned int) diff);
    return 0;
}
I am writing my first OpenMP project. This is my work:
// (signature reconstructed from the variables used below)
void myFooFunction(void* middleMan, void* middleManDouble,
                   int64_t Frames, int64_t Height, int64_t Width, int BitDepth)
{
    int64_t Gm = 0;
    double* dist = (double*)middleManDouble;
    int64_t LengthofData = Frames * Height * Width;

    mexEvalString("tic");
    if (BitDepth == 10){
        const unsigned __int16* src__int16 = (unsigned __int16*)middleMan;
        //#pragma omp parallel
        //#pragma omp for
        #pragma omp parallel for
        for (Gm = 0; Gm < LengthofData; ++Gm){
            dist[Gm] = (double)(src__int16[Gm]);
        }
    }
    else if (BitDepth == 8){
        const unsigned __int8* src__int8 = (unsigned __int8*)middleMan;
        //#pragma omp parallel
        //#pragma omp for
        #pragma omp parallel for
        for (Gm = 0; Gm < LengthofData; ++Gm){
            dist[Gm] = (double)(src__int8[Gm]);
        }
    }
    mexEvalString("toc");
}
But I don't see an improvement in the execution time of the for loop, despite the fact that my CPU core utilizations are all above 95%. What is wrong with my code?
Am I using OpenMP in the correct way? I just want to execute the for loop on multiple threads.