How to use LAPACK under Windows - C++

I want to use LAPACK and write a C++ matrix wrapper for it, but LAPACK is written in Fortran. There is CLAPACK, but I want to build from the original source: first compile the *.f and *.cpp files to object files, then link them into an application.
These are the apps and sources I have:
Visual Studio Professional Edition, Dev-C++, Ultimate++, MinGW, whatever
g95 and gfortran (under MinGW) compilers
LAPACK (latest source)
BLAS (included in LAPACK)
How can I build such an application? Please help.
My operating system is Windows 7, my CPU is a Core 2 Duo, and I don't have Intel MKL.

You can use the official C bindings for LAPACK (LAPACKE), and then build your C++ wrapper on top of them. That avoids having to worry about the Fortran calling conventions, and the C bindings are a bit friendlier for C/C++ programmers than calling the Fortran routines directly.
Additionally, you could use one of the C++ matrix libraries that are already available, instead of rolling your own. I recommend Eigen.
P.S.: Eigen matrices/vectors have a data() member that allows calling LAPACK without making temporary copies.

Related

Multithreaded MKL + OpenMP compiled with GCC

My understanding, from reading the Intel MKL documentation and posts such as this one --
Calling multithreaded MKL in from openmp parallel region --
is that combining OpenMP parallelization in your own code AND MKL's internal OpenMP threading for MKL functions such as DGESVD or DPOTRF is impossible unless you build with the Intel compiler. For example, I have a large linear system I'd like to solve using MKL, but I'd also like to take advantage of parallelization to build the system matrix (my own code, independent of MKL), in the same binary executable.
Intel states in the MKL documentation that 3rd party compilers "may have to disable multithreading" for MKL functions. So the options are:
OpenMP parallelization of your own code (standard #pragma omp ..., etc.) and single-threaded calls to MKL
multi-threaded calls to MKL functions ONLY, and single-threaded code everywhere else
use the Intel compiler (I would like to use gcc, so not an option for me)
parallelize both your code and MKL with Intel TBB? (not sure whether this would work)
Of course, MKL ships with its own OpenMP runtime, libiomp*, which gcc can link against. Is it possible to use this library to parallelize your own code in addition to the MKL functions? I assume some direct management of threads would be involved. However, as far as I can tell there are no iomp dev headers included with MKL, which may answer that question (--> no).
So at this point it seems the only answer is Intel TBB (Threading Building Blocks). Just wondering if I'm missing something or if there's a clever workaround.
(Edit:) Another solution might be if MKL had an interface that accepts custom C++11 lambdas or other arbitrary code (e.g., containing nested for loops) for parallelization via whatever internal threading scheme it uses. So far I haven't seen anything like this.
Intel TBB will also enable better nested parallelism, which might help in some cases. If you want to use GNU OpenMP with MKL, there are the following options:
Dynamically Selecting the Interface and Threading Layer. Link against the mkl_rt library and then
set the env var MKL_THREADING_LAYER=GNU prior to loading MKL,
or call mkl_set_threading_layer(MKL_THREADING_GNU);
Linking with Threading Libraries directly (though the linked page makes no explicit mention of GNU OpenMP). This is not recommended when you are building a library, a plug-in, or an extension module (e.g. a Python package) that can be mixed with other components that might use MKL differently. Link against mkl_gnu_thread.
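The two options above can be sketched as build commands. This is a sketch assuming MKL's LP64 libraries are on the linker path; file names like my_code.cpp are placeholders, and the exact library list should be confirmed against the MKL Link Line Advisor for your MKL version:

```shell
# Option 1: single dynamic interface, pick the GNU threading layer at run time.
g++ -fopenmp my_code.cpp -lmkl_rt -o app
export MKL_THREADING_LAYER=GNU   # must be set before the process loads MKL
./app

# Option 2: link the GNU threading layer directly.
g++ -fopenmp my_code.cpp \
    -lmkl_intel_lp64 -lmkl_gnu_thread -lmkl_core \
    -lgomp -lpthread -lm -o app
```

With either option, both your own #pragma omp regions and MKL's internal threading run on the GNU OpenMP runtime, avoiding the mixed-runtime problem.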

C++ templates and OpenBLAS

There exist C++ libraries such as Eigen or Boost::uBlas that implement matrix types and computations.
There also exist libraries such as LAPACK, Goto-BLAS, OpenBLAS and ATLAS that implement highly optimized dense matrix computations over floating-point types.
I was wondering whether some of these C++ libraries, perhaps through template specialization, call OpenBLAS for the types OpenBLAS supports. That would seem to be the best of both worlds.
I don't know about Boost::uBlas, but with the current version (3.3 or higher) of Eigen it is possible to link to "any F77 compatible BLAS or LAPACK libraries", and OpenBLAS exposes the standard F77 BLAS interface, so yes. See this for details.
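In practice this is a compile-time switch: Eigen 3.3+ provides the EIGEN_USE_BLAS and EIGEN_USE_LAPACKE macros, which route supported dense operations to an external backend. A sketch of a build line, assuming OpenBLAS and LAPACKE are installed (paths and the program name are placeholders):

```shell
# Define the macros before any Eigen header is included, then link the backend.
g++ -O2 -DEIGEN_USE_BLAS -DEIGEN_USE_LAPACKE \
    -I/path/to/eigen my_prog.cpp -lopenblas -llapacke -o my_prog
```

No source changes are needed beyond the defines; Eigen dispatches to the BLAS/LAPACK calls internally for the scalar types the backend supports (float, double, and the complex variants).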

Intel compiler with OpenCV: how does it work?

I have a C# project and a C++/CLI project which wraps a C++ project.
I have compiled OpenCV 3.0.0 and I'm using fully static linking (*.lib without *.dll files).
Since I want to use Intel IPP, I want to use the Intel compiler to build the C++ project.
At first I thought I would have problems, since the OpenCV libs aren't compiled with the Intel compiler (they are compiled with the VS2012 compiler).
So, just to try, I switched the toolset flag to Intel Compiler 14.0 and it worked; I can run my app and use OpenCV and IPP.
Can you please explain a little why this is possible? Is it the same for dynamic loading (*.dll)? Or, if I use OpenCV with dynamic loading, will I need to compile the DLLs with the Intel compiler?
Thanks
Roughly speaking, the compiler used to generate a given piece of machine code does not really matter (except for the quality of the code). What is important is that certain conventions are respected: the format of object code files, the naming of external symbols, and the ABI (the way functions are called).

How can I get the Boost numeric bindings?

I have a file that needs the Boost numeric bindings library. How can I get that library?
The following link does not seem to work anymore; the zipped file is corrupted.
http://mathema.tician.de/dl/software/boost-numeric-bindings/
I hope to be able to use it on Windows with Visual Studio.
I have a file that needs the Boost numeric bindings library. How can I get that library?
You can grab the sources of the current version via
svn co http://svn.boost.org/svn/boost/sandbox/numeric_bindings
The version from http://mathema.tician.de/dl/software/boost-numeric-bindings/ is an older version with a different interface. You can grab the sources of that older version via
svn co http://svn.boost.org/svn/boost/sandbox/numeric_bindings-v1
I hope to be able to use it on Windows with Visual Studio.
You need a BLAS/LAPACK library in addition to the bindings. For Windows, Intel MKL, AMD's ACML, CLAPACK and ATLAS used to work, the last time I checked. (You only need one of these, but note that ATLAS only implements BLAS and a small subset of LAPACK.) These libraries have widely different performance and license conditions, but they all implement (more or less) the same interface.
In general, the people at http://lists.boost.org/mailman/listinfo.cgi/ublas seem to be helpful if you try to use the bindings (or uBLAS) and run into problems. But I'm not sure whether any of them checks here on Stack Overflow for potential users who have run into problems.

GSL blas routines slow in Visual Studio

I just installed GSL and BLAS in Visual Studio 2010 successfully using this guide:
However, the matrix multiplications using CBLAS are ridiculously slow. A friend on Linux had the same problem. Instead of linking to BLAS via GSL, he linked directly to CBLAS (I don't exactly understand what this means, but maybe you do?) and it got about ten times as fast.
How can I do this in Visual Studio? In the file I downloaded I couldn't find any more files that I could build with Visual Studio.
BLAS is the original Fortran library of simple linear-algebra operations, like multiplying or adding vectors and matrices. It implements the vector-vector, matrix-vector, and matrix-matrix operations.
Later, different libraries were created that do the same as the original BLAS but with more performance. The interface was preserved, so you can use any BLAS-compatible library, e.g. one from your CPU vendor.
This FAQ http://www.netlib.org/blas/faq.html lists some libraries; Wikipedia has another list: http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms
The only catch with GSL is that it uses the C language. The Fortran BLAS interface can be mapped to C in various ways (the problem is how Fortran routine names translate to C symbol names; e.g., Fortran DGEMM may be called dgemm_ or dgemm in C). GSL uses the CBLAS convention: a cblas_ prefix, e.g. DGEMM is named cblas_dgemm.
So try some libraries from those lists and check whether the library provides the cblas_ functions. If yes, GSL can use that library.