How can OpenMP be used with a class and Eigen? - c++

I am trying to write code that uses OpenMP, a class, and Eigen.
The overall code is at the bottom.
The code is a simplified version of what I want to do in my project: just a matrix inverse and multiplication, using a class and OpenMP.
If I don't use OpenMP, I get the following results:
solution in for loop at 0th iteration is... 1 1 1
solution in class at 0th iteration is... 1 1 1
solution in for loop at 1th iteration is... 4 4 4
solution in class at 1th iteration is... 4 4 4
solution in for loop at 2th iteration is... 9 9 9
solution in class at 2th iteration is... 9 9 9
solution in for loop at 3th iteration is... 16 16 16
solution in class at 3th iteration is... 16 16 16
However, if I use OpenMP, I get the following results:
9 11 91
1
16 16
solution in class at 3th iteration is... 116 16 161
1 1
solution in class at 2th iteration is... 1
9 9 9
solution in for loop at 1th iteration is... solution in for loop at 9
13th iteration is... 1
solution in class at 0th iteration is... 4 4 4 16 solution in for loop at 1
solution in for loop at 01 11
1 solution in for loop at 1 solution in for loop at 41 3th iteration is... th iteration is...
16 9 solution in for loop at th iteration is... 1 solution in class at 33th iteration is... th iteration is... 161
16 solution in for loop at 016 16 2th iteration is... 1616 1616 16 161616th iteration is...
solution in for loop at 19 solution in class at 3th iteration is... 9 9
16
The results are corrupted.
Why does this happen?
Also, at first I found that using only "#pragma omp parallel for" results in an error that occurs when the LDLT is not initialized.
So, I added the following instead of "#pragma omp parallel for":
#pragma omp parallel for private(j,k) firstprivate(foo)
class Foo {
private:
    std::vector<Eigen::Matrix<double, 3, 3>> a;
    Eigen::LDLT<Eigen::MatrixXd> Minv; // Minv
    std::vector<Eigen::Matrix<double, 3, 1>> b;
    std::vector<Eigen::Matrix<double, 3, 1>> solution;
public:
    Foo() {};
    ~Foo() {};
    void Initialization() {
        a.resize(4);
        b.resize(4);
        solution.resize(4);
        for (int i = 0; i < 4; i++) {
            a.at(i).setZero();
            b.at(i).setZero();
            solution.at(i).setZero();
        }
    }
    void SetInv(int idx) {
        a.at(idx) = (1.0/(idx+1.0))*Eigen::Matrix3d::Identity();
        b.at(idx) = (idx+1)*Eigen::Vector3d::Ones();
        Minv.compute(a.at(idx));
    }
    void Calculation(int idx) {
        solution.at(idx) = Minv.solve(b.at(idx));
    }
    Eigen::Matrix<double, 3, 1>& ReturnSolution(int idx) {
        return solution.at(idx);
    }
    void ShowSolution(int idx) {
        std::cout << " solution in class at " << idx << "th iteration is... "
                  << solution.at(idx).transpose() << std::endl;
    }
};
I would like to know how the corrupted result is produced.

First of all, the results are corrupted because with OpenMP you made the code multithreaded: several threads are writing to cout at the same time, which is why you see the mess.
If you want clean output, you need to use a mutex to make the printing thread-safe. Simplified example:
#include <mutex>
std::mutex mtx;

// the parallel-for pragma must be immediately followed by a for loop
#pragma omp parallel for private(j,k) firstprivate(foo)
for (int i = 0; i < n; ++i) {
    /* Your work happens here */
    { // use braces to show the scope in which the mutex is held
        std::lock_guard<std::mutex> lock(mtx);
        std::cout << "Some result you want to print";
    }
}
And a link to the documentation: https://en.cppreference.com/w/cpp/thread/lock_guard
Also, at first, I found that only using "#pragma omp parallel for"
results in the error that occurs when LDLT is not initialized.
Please provide sample code with your implementation of the #pragma omp... directive, so it will be easier to answer your question.

Related

What can prevent multiprocessing from improving speed - OpenMP?

I am scanning through every permutation of vectors and I would like to multithread this process (each thread would scan all the permutations of some vectors).
I managed to extract code that does not speed up (I know it does not do anything useful, but it reproduces my problem):
int main(int argc, char *argv[]) {
    std::vector<std::string *> myVector;
    for (int i = 0; i < 8; ++i) {
        myVector.push_back(new std::string("myString" + std::to_string(i)));
    }
    std::sort(myVector.begin(), myVector.end());
    omp_set_dynamic(0);
    omp_set_num_threads(8);
    #pragma omp parallel for shared(myVector)
    for (int i = 0; i < 100; ++i) {
        std::vector<std::string*> test(myVector);
        do { // here is a permutation
        } while (std::next_permutation(test.begin(), test.end())); // tests all the permutations of this combination
    }
    return 0;
}
The results are:
1 thread : 15 seconds
2 threads : 8 seconds
4 threads : 15 seconds
8 threads : 18 seconds
16 threads : 20 seconds
I am working with an i7 processor with 8 cores. I can't understand how it could be slower with 8 threads than with 1. I don't think the cost of creating the threads is higher than that of going through 40320 permutations, so what is happening?
Thanks to everyone's help, I finally managed to find the answer.
There were two problems:
Quick performance profiling showed that most of the time was spent in std::lockit, which is used for iterator debugging in Visual Studio. To prevent that, just add the command-line options /D "_HAS_ITERATOR_DEBUGGING=0" /D "_SECURE_SCL=0". That was why adding more threads resulted in a loss of time.
Switching optimization on also helped improve the performance.
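Put together, the two fixes amount to a compile command along these lines (MSVC from a developer prompt; the file name is illustrative):

```shell
# Release-style build: optimization on, checked iterators off, OpenMP enabled
cl /EHsc /O2 /openmp /D "_HAS_ITERATOR_DEBUGGING=0" /D "_SECURE_SCL=0" permutations.cpp
```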

No speedup for vector sums with threading

I have a C++ program which basically performs some matrix calculations. For these I use LAPACK/BLAS and usually link to the MKL or ACML, depending on the platform. A lot of these matrix calculations operate on different, independent matrices, and hence I use std::thread to let these operations run in parallel. However, I noticed that I get no speed-up when using more threads. I traced the problem down to the daxpy BLAS routine. It seems that if two threads use this routine in parallel, each thread takes twice the time, even though the two threads operate on different arrays.
The next thing I tried was writing a new simple method to perform vector additions to replace the daxpy routine. With one thread this new method is as fast as the BLAS routine, but, when compiling with gcc, it suffers from the same problem as the BLAS routine: doubling the number of threads running in parallel also doubles the amount of time each thread needs, so no speed-up is gained. However, using the Intel C++ Compiler this problem vanishes: with an increasing number of threads, the time a single thread needs is constant.
However, I also need to compile on systems where no Intel compiler is available. So my questions are: why is there no speed-up with gcc, and is there any possibility of improving the gcc performance?
I wrote a small program to demonstrate the effect:
// $(CC) -std=c++11 -O2 threadmatrixsum.cpp -o threadmatrixsum -pthread
#include <iostream>
#include <thread>
#include <vector>
#include "boost/date_time/posix_time/posix_time.hpp"
#include "boost/timer.hpp"

void simplesum(double* a, double* b, std::size_t dim);

int main() {
    for (std::size_t num_threads {1}; num_threads <= 4; num_threads++) {
        const std::size_t N { 936 };
        std::vector<std::size_t> times(num_threads, 0);
        auto threadfunction = [&](std::size_t tid)
        {
            const std::size_t dim { N * N };
            double* pA = new double[dim];
            double* pB = new double[dim];
            for (std::size_t i {0}; i < N; ++i) {
                pA[i] = i;
                pB[i] = 2 * i;
            }
            boost::posix_time::ptime now1 =
                boost::posix_time::microsec_clock::universal_time();
            for (std::size_t n {0}; n < 1000; ++n) {
                simplesum(pA, pB, dim);
            }
            boost::posix_time::ptime now2 =
                boost::posix_time::microsec_clock::universal_time();
            boost::posix_time::time_duration dur = now2 - now1;
            times[tid] += dur.total_milliseconds();
            delete[] pA;
            delete[] pB;
        };
        std::vector<std::thread> mythreads;
        // start threads
        for (std::size_t n {0}; n < num_threads; ++n) {
            mythreads.emplace_back(threadfunction, n);
        }
        // wait for threads to finish
        for (std::size_t n {0}; n < num_threads; ++n) {
            mythreads[n].join();
            std::cout << " Thread " << n+1 << " of " << num_threads
                      << " took " << times[n] << "msec" << std::endl;
        }
    }
}

void simplesum(double* a, double* b, std::size_t dim) {
    for (std::size_t i {0}; i < dim; ++i) {
        *(++a) += *(++b);
    }
}
The output with gcc:
Thread 1 of 1 took 532msec
Thread 1 of 2 took 1104msec
Thread 2 of 2 took 1103msec
Thread 1 of 3 took 1680msec
Thread 2 of 3 took 1821msec
Thread 3 of 3 took 1808msec
Thread 1 of 4 took 2542msec
Thread 2 of 4 took 2536msec
Thread 3 of 4 took 2509msec
Thread 4 of 4 took 2515msec
The output with icc:
Thread 1 of 1 took 663msec
Thread 1 of 2 took 674msec
Thread 2 of 2 took 674msec
Thread 1 of 3 took 681msec
Thread 2 of 3 took 681msec
Thread 3 of 3 took 681msec
Thread 1 of 4 took 688msec
Thread 2 of 4 took 689msec
Thread 3 of 4 took 687msec
Thread 4 of 4 took 688msec
So, with icc the time one thread needs to perform the computations is constant (as I would have expected; my CPU has 4 physical cores), while with gcc the time for one thread increases. Replacing the simplesum routine with BLAS::daxpy yields the same results for icc and gcc (no surprise, as most time is spent in the library), which are almost the same as the gcc results stated above.
The answer is fairly simple: your threads are fighting for memory bandwidth!
Consider that you perform one floating point addition per two loads (one element of a, one of b) and one store (the result back into a). Most modern systems providing multiple CPUs actually have to share the memory controller among several cores.
The following was run on a system with 2 physical CPU sockets and 12 cores (24 with HT). Your original code exhibits exactly your problem:
Thread 1 of 1 took 657msec
Thread 1 of 2 took 1447msec
Thread 2 of 2 took 1463msec
[...]
Thread 1 of 8 took 5516msec
Thread 2 of 8 took 5587msec
Thread 3 of 8 took 5205msec
Thread 4 of 8 took 5311msec
Thread 5 of 8 took 2731msec
Thread 6 of 8 took 5545msec
Thread 7 of 8 took 5551msec
Thread 8 of 8 took 4903msec
However, by simply increasing the arithmetic density, we can see a significant increase in scalability. To demonstrate, I changed your addition routine to also perform an exponentiation: *(++a) += std::exp(*(++b));. The result shows almost perfect scaling:
Thread 1 of 1 took 7671msec
Thread 1 of 2 took 7759msec
Thread 2 of 2 took 7759msec
[...]
Thread 1 of 8 took 9997msec
Thread 2 of 8 took 8135msec
Thread 3 of 8 took 10625msec
Thread 4 of 8 took 8169msec
Thread 5 of 8 took 10054msec
Thread 6 of 8 took 8242msec
Thread 7 of 8 took 9876msec
Thread 8 of 8 took 8819msec
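For reference, that higher-density kernel can be sketched like this (indexed form; note that the question's `*(++a)` pre-increments appear to skip element 0 and touch one element past the end):

```cpp
#include <cmath>
#include <cstddef>

// Sketch: adding an exp() per element makes the loop compute-bound
// instead of memory-bound, which is why it scales across threads.
void simplesum_exp(double* a, const double* b, std::size_t dim) {
    for (std::size_t i = 0; i < dim; ++i)
        a[i] += std::exp(b[i]);
}
```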
But what about ICC?
First, ICC inlines simplesum. Proving that inlining happens is simple: using icc, I disabled multi-file interprocedural optimization and moved simplesum into its own translation unit. The difference is astonishing. The performance went from
Thread 1 of 1 took 687msec
Thread 1 of 2 took 688msec
Thread 2 of 2 took 689msec
[...]
Thread 1 of 8 took 690msec
Thread 2 of 8 took 697msec
Thread 3 of 8 took 700msec
Thread 4 of 8 took 874msec
Thread 5 of 8 took 878msec
Thread 6 of 8 took 874msec
Thread 7 of 8 took 742msec
Thread 8 of 8 took 868msec
To
Thread 1 of 1 took 1278msec
Thread 1 of 2 took 2457msec
Thread 2 of 2 took 2445msec
[...]
Thread 1 of 8 took 8868msec
Thread 2 of 8 took 8434msec
Thread 3 of 8 took 7964msec
Thread 4 of 8 took 7951msec
Thread 5 of 8 took 8872msec
Thread 6 of 8 took 8286msec
Thread 7 of 8 took 5714msec
Thread 8 of 8 took 8241msec
This already explains why the library performs badly: ICC cannot inline it and therefore no matter what else causes ICC to perform better than g++, it will not happen.
It also gives a hint as to what ICC might be doing right here... What if instead of executing simplesum 1000 times, it interchanges the loops so that it
Loads two doubles
Adds them 1000 times (or even performs a = 1000 * b)
Stores two doubles
This would increase arithmetic density without adding any exponentials to the function... How to prove this? Well, to begin let us simply implement this optimization and see what happens! To analyse, we will look at the g++ performance. Recall our benchmark results:
Thread 1 of 1 took 640msec
Thread 1 of 2 took 1308msec
Thread 2 of 2 took 1304msec
[...]
Thread 1 of 8 took 5294msec
Thread 2 of 8 took 5370msec
Thread 3 of 8 took 5451msec
Thread 4 of 8 took 5527msec
Thread 5 of 8 took 5174msec
Thread 6 of 8 took 5464msec
Thread 7 of 8 took 4640msec
Thread 8 of 8 took 4055msec
And now, let us exchange
for (std::size_t n {0}; n < 1000; ++n) {
    simplesum(pA, pB, dim);
}
with the version in which the inner loop was made the outer loop:
double* a = pA;
double* b = pB;
for (std::size_t i {0}; i < dim; ++i, ++a, ++b) {
    double x = *a, y = *b;
    for (std::size_t n {0}; n < 1000; ++n) {
        x += y;
    }
    *a = x;
}
The results show that we are on the right track:
Thread 1 of 1 took 693msec
Thread 1 of 2 took 703msec
Thread 2 of 2 took 700msec
[...]
Thread 1 of 8 took 920msec
Thread 2 of 8 took 804msec
Thread 3 of 8 took 750msec
Thread 4 of 8 took 943msec
Thread 5 of 8 took 909msec
Thread 6 of 8 took 744msec
Thread 7 of 8 took 759msec
Thread 8 of 8 took 904msec
This proves that the loop interchange optimization is indeed the main source of the excellent performance ICC exhibits here.
Note that none of the tested compilers (MSVC, ICC, g++ and clang) will replace the loop with a multiplication, which improves performance by 200x in the single threaded and 15x in the 8-threaded cases. This is due to the fact that the numerical instability of the repeated additions may cause wildly differing results when replaced with a single multiplication. When testing with integer data types instead of floating point data types, this optimization happens.
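The integer case can be seen directly: repeated integer addition is exact, so collapsing the loop into one multiplication changes nothing, and the compiler is free to strength-reduce it. A sketch of the equivalence (not compiler output):

```cpp
// For integers, adding y to x n times equals x + n*y exactly, so a
// compiler may replace the loop with a multiply. For doubles, rounding
// makes the two forms differ, which blocks that transformation.
long repeated_add(long x, long y, int n) {
    for (int i = 0; i < n; ++i)
        x += y;
    return x;  // same value as x + (long)n * y
}
```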
How can we force g++ to perform this optimization?
Interestingly enough, the true killer for g++ is not an inability to perform loop interchange. When called with -floop-interchange, g++ can perform this optimization as well, but only when the odds are significantly stacked in its favor.
Instead of std::size_t, all bounds were expressed as ints. Not long, not unsigned int, but int. I still find it hard to believe, but it seems this is a hard requirement.
Instead of incrementing pointers, index them: a[i] += b[i];
G++ needs to be told -floop-interchange. A simple -O3 is not enough.
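Put together, a version satisfying all three criteria might look like the following sketch (the `repeats` parameter plays the role of the benchmark's 1000 outer iterations):

```cpp
// Sketch for g++ -O3 -floop-interchange: int bounds and indexed element
// access, so the interchange pass can move the repeat loop inward.
void repeatedsum(double* a, const double* b, int dim, int repeats) {
    for (int n = 0; n < repeats; ++n)
        for (int i = 0; i < dim; ++i)
            a[i] += b[i];
}
```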
When all three criteria are met, the g++ performance is similar to what ICC delivers:
Thread 1 of 1 took 714msec
Thread 1 of 2 took 724msec
Thread 2 of 2 took 721msec
[...]
Thread 1 of 8 took 782msec
Thread 2 of 8 took 1221msec
Thread 3 of 8 took 1225msec
Thread 4 of 8 took 781msec
Thread 5 of 8 took 788msec
Thread 6 of 8 took 1262msec
Thread 7 of 8 took 1226msec
Thread 8 of 8 took 820msec
Note: The version of g++ used in this experiment was 4.9.0 on x64 Arch Linux.
OK, I came to the conclusion that the main problem is that the processor cores act on different parts of memory in parallel, and hence one has to deal with lots of cache misses, which slows the process down further. Putting the actual sum function in a critical section
summutex.lock();
simplesum(pA, pB, dim);
summutex.unlock();
solves the problem of the cache misses, but of course does not yield optimal speed-up. Anyway, because the other threads are now blocked, the simplesum method might as well use all available threads for the sum:
void simplesum(double* a, double* b, std::size_t dim, std::size_t numberofthreads) {
    omp_set_num_threads(numberofthreads);
    #pragma omp parallel
    {
        #pragma omp for
        for (std::size_t i = 0; i < dim; ++i) {
            a[i] += b[i];
        }
    }
}
In this case all the threads work on the same chunk of memory: it should be in the processor cache, and if the processor needs to load some other part of memory into its cache, the other threads benefit from this as well (depending on whether this is the L1 or L2 cache, but I reckon the details do not really matter for the sake of this discussion).
I don't claim that this solution is perfect or anywhere near optimal, but it seems to work much better than the original code. And it does not rely on loop switching tricks, which I cannot do in my actual code.

concatenate two vectors for divide and conquer algorithm c++

I'm trying to get a divide and conquer algorithm working that returns a vector, and since it's a divide and conquer algorithm, it obviously needs to run more than one instance at a time. The problem arises in the return portion.
Right now I have:
vector<MyObject> MyProgram(...) {
    ...Code that's not important...
    return MyProgram(...) + MyProgram(...);
}
Unfortunately, apparently I can't just use the + operator. I know that you can concatenate vectors by inserting one into the other, or copying one into the other, but then MyProgram would be called one after the other, not simultaneously.
I'm literally guessing that this is what you're trying to accomplish, but it is conjecture at best, so let me know whether this answer should be deleted for being inapplicable.
The following defines a function that returns an empty vector if the argument is zero. Otherwise, it returns a vector of N instances of the value N, concatenated with the function evaluated at N-1. The concatenation is done asynchronously via a separate thread, thereby giving you the potential for concurrency.
#include <iostream>
#include <vector>
#include <future>

std::vector<unsigned int> MyFunction(unsigned arg)
{
    std::vector<unsigned int> result;
    if (arg == 0)
        return result;
    auto fc = std::async(std::launch::async, MyFunction, arg - 1);
    for (unsigned int i = 0; i < arg; ++i)
        result.emplace_back(arg);
    std::vector<unsigned int> theirs = fc.get();
    result.insert(result.begin(), theirs.begin(), theirs.end());
    return result;
}

int main()
{
    std::vector<unsigned int> res = MyFunction(8);
    for (auto x : res)
        std::cout << x << ' ';
    std::cout << std::endl;
    return 0;
}
Output
1 2 2 3 3 3 4 4 4 4 5 5 5 5 5 6 6 6 6 6 6 7 7 7 7 7 7 7 8 8 8 8 8 8 8 8
Each miniature sequence is generated on a separate thread except the first one (in this case the sequence of 8's), which is generated on the primary thread.
Anyway, I hope it gives you some ideas.
Use a wrapper class and add an overloaded operator+ to it. Such a wrapper could look like this:
class Wrapper
{
public:
    Wrapper(const vector<MyObject>& _vec) : vec(_vec) {}
    Wrapper operator+(const Wrapper& rhs) const
    {
        vector<MyObject> tmp(vec);
        tmp.insert(tmp.end(), rhs.vec.begin(), rhs.vec.end());
        return Wrapper(tmp);
    }
    operator const vector<MyObject>&() { return vec; }
private:
    vector<MyObject> vec;
};
Your function would then look like:
Wrapper MyProgram(...) {
    ...Code that's not important...
    return MyProgram(...) + MyProgram(...);
}
Please check the comments for important warnings about your current design.

Replacing each number within an array with the maximum number up to that point

#include<iostream.h>
#include<fstream.h>
ifstream f("date.in");
using namespace std;
int i;

int P(int a[100], int k, int max)
{
    max = a[1];
    for (i = 2; i <= k; i++)
        if (a[i] > max)
            max = a[i];
    return max;
}

int main()
{
    int x, a[100], n;
    f >> n;
    for (i = 1; i <= n; i++)
        f >> a[i];
    for (i = 2; i <= n; i++)
        a[i] = P(a, i, x);
    for (i = 1; i <= n; i++)
        cout << a[i] << " ";
}
My "date.in" file consists of the following :
12
4 6 3 7 8 1 6 2 7 9 10 8
As the title states, the program should modify the array from the file such that each number is replaced by the maximum value found in the array up to, and including, that number's position. I've gone through it a hundred times but cannot figure out what's wrong with my code.
When I run it, I get the following:
4 6 3 7 8 8 6 8 7 9 10 10
Any assistance would be appreciated.
int i;
Globals are usually a bad idea, because this loop:
for (i = 2; i <= n; i++)
    a[i] = P(a, i, x);
and this loop:
for (i = 2; i <= k; i++)
    if (a[i] > max)
        max = a[i];
are running "at the same time" (they share the same global i), and thus i in the first one is NOT counting from 2 to n properly; it actually only hits the first index and then the even indexes. (Check your results: the even indexes are 100% correct: x 6 x 7 x 8 x 8 x 9 x 10.) If you used counters local to each loop, for (int i = 2; ..., this problem wouldn't happen.
Also your entire design is slow. Not sure why you did it that way, because it can be done easily in a single pass: http://ideone.com/LmD0HX.
And use <iostream> not <iostream.h>. They're actually different files.
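The single pass the link refers to can be sketched as follows (keeping the question's 1-based indexing; this is my reconstruction, not the linked code):

```cpp
// Sketch: one pass replacing each element with the running maximum of
// everything up to and including it. Uses indices 1..n, as in the question.
void prefixMax(int a[], int n) {
    for (int i = 2; i <= n; i++)
        if (a[i] < a[i - 1])
            a[i] = a[i - 1];
}
```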

Windows Critical Section strange behaviour

I have two shared global variables
int a = 0;
int b = 0;
and two threads
// thread 1
for (int i = 0; i < 10; ++i) {
    EnterCriticalSection(&sect);
    a++;
    b++;
    std::cout << a << " " << b << std::endl;
    LeaveCriticalSection(&sect);
}

// thread 2
for (int i = 0; i < 10; ++i) {
    EnterCriticalSection(&sect);
    a--;
    b--;
    std::cout << a << " " << b << std::endl;
    LeaveCriticalSection(&sect);
}
The code always prints the following output
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
9 9
8 8
7 7
6 6
5 5
4 4
3 3
2 2
1 1
0 0
That is quite strange; it looks like the threads are working sequentially. What's the problem here?
Thanks.
Each thread has a specific time slice during which it executes before being preempted. In your example, the time slice seems to be longer than the time required to complete the loop.
However, you can actively yield control by calling Sleep(0) after leaving the critical section inside the loop.
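A portable sketch of that idea, using std::mutex and std::this_thread::yield() in place of the Windows primitives (illustrative only; yielding invites a context switch but still guarantees no particular ordering):

```cpp
#include <mutex>
#include <thread>

int a = 0, b = 0;
std::mutex sect;  // stands in for the CRITICAL_SECTION

void worker(int delta) {
    for (int i = 0; i < 10; ++i) {
        {
            std::lock_guard<std::mutex> lock(sect);
            a += delta;
            b += delta;
        }
        std::this_thread::yield();  // give the other thread a chance to run
    }
}
```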
IMO the critical section leave/enter in your example is so fast that the other thread is not fast enough to execute its enter during that moment.
Try putting some (maybe random) sleeps in to slow the code down and see the desired effect.
Note:
The default timeout for EnterCriticalSection is something like 30 days (which effectively means infinity), so you cannot expect the function to time out. And the documentation says:
There is no guarantee about the order in which threads will obtain ownership of the critical section, however, the system will be fair to all threads.
For me it looks like the topic discussed in http://social.msdn.microsoft.com/forums/en-US/windowssdk/thread/980e5018-3ade-4823-a6dc-5ddbcc3091d5/
Please look at the example from June 28, 2006 (unfortunately I cannot find the original Microsoft article describing the change to CriticalSection).
Could you try your code on Windows XP? What does it show?
I guess that I/O operations (cout) affect scheduling similarly to a Sleep() call, so starting with Windows Vista a thread could cause starvation of other threads when doing I/O inside a CS.