I am trying to enable Intel MKL (2023.0.0) for the Eigen C++ library (3.4.0) in Visual Studio. So far I am able to run Eigen in Visual Studio 2022 with no issues.
But as stated in this other thread and in the official Eigen documentation, when I try to enable Intel MKL by adding
#define EIGEN_USE_BLAS
#include <Eigen/src/Core/util/MKL_support.h>
I manage to compile and execute, but it throws this error:
Intel MKL ERROR: Parameter 6 was incorrect on entry to SGEMV .
Intel MKL ERROR: Parameter 2 was incorrect on entry to SGEMV .
To enable Intel MKL in my Visual Studio project, I added the include directories and library directories as stated here and in the Intel MKL linker advisor; the libraries the linker advisor recommends are
mkl_intel_ilp64.lib mkl_sequential.lib mkl_core.lib
For those who are curious, the line of code that throws this error is
Eigen::EigenSolver<Eigen::MatrixXf> es(eig_cov);
I am basically trying to compute the eigenvectors of an 8192x8192 matrix (the matrix is symmetric and has been validated with other libraries like OpenCV and Armadillo; they just take very long, and I have another thread about that in case you are curious).
Since my program compiles, executes, and only fails at the line above, I would say my dev environment is set up correctly, but I am not really sure why Intel MKL does not like Eigen. Any pointers will be appreciated.
PS: I tried to use EIGEN_USE_MKL_ALL instead of EIGEN_USE_BLAS, but that opened a can of worms and looks scarier to solve that way.
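For reference, here is a minimal sketch of the setup described above (the matrix size is reduced and the contents are random placeholders, not my real data):
// minimal repro sketch: EIGEN_USE_BLAS must be defined before any Eigen header
#define EIGEN_USE_BLAS
#include <Eigen/Dense>
#include <Eigen/src/Core/util/MKL_support.h>
#include <iostream>

int main() {
    // small random symmetric matrix standing in for the real 8192x8192 covariance
    Eigen::MatrixXf eig_cov = Eigen::MatrixXf::Random(512, 512);
    eig_cov = (eig_cov + eig_cov.transpose()).eval();
    Eigen::EigenSolver<Eigen::MatrixXf> es(eig_cov);   // the call that triggers the SGEMV error
    std::cout << es.eigenvalues().head(3) << std::endl;
    return 0;
}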
Related
I am trying to write OpenCL programs in C++ using the G++ compiler on Windows 10, but I am not able to find any suitable SDK for my work.
Nvidia CUDA requires Visual Studio compilers to work, and the AMD APP SDK seems to be discontinued, with the libraries now said to be included in the driver itself.
My PC has both AMD and Nvidia GPUs, so either implementation should be fine for OpenCL. Can anyone suggest how I can proceed, and also kindly clarify how to use the libraries present in the OpenCL driver from my C++ program, as mentioned by AMD, if possible?
Edit:
I found out that the OpenCL library is already present in Windows at
C:\Windows\System32\OpenCL.dll
We only need the headers to compile our program with g++. It can be done as shown below.
Install the OpenCL headers from:
https://packages.msys2.org/package/mingw-w64-x86_64-opencl-headers
Once the headers are present in the MinGW64 include directory, I wrote my program normally and compiled it using the g++ command below.
g++ main.cpp C:\Windows\System32\OpenCL.dll -o main.exe
And that's it. It worked!
http://arkanis.de/weblog/2014-11-25-minimal-opencl-development-on-windows was of great help in understanding the OpenCL library setup on Windows.
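For illustration, a minimal main.cpp compiled with the command above could look something like this (a hypothetical platform-listing example, assuming the headers from the MSYS2 package):
// main.cpp - lists the available OpenCL platforms
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <iostream>
#include <vector>

int main() {
    cl_uint count = 0;
    if (clGetPlatformIDs(0, nullptr, &count) != CL_SUCCESS || count == 0) {
        std::cerr << "No OpenCL platforms found" << std::endl;
        return 1;
    }
    std::vector<cl_platform_id> platforms(count);
    clGetPlatformIDs(count, platforms.data(), nullptr);
    for (cl_platform_id p : platforms) {
        char name[256] = {};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        std::cout << "Platform: " << name << std::endl;
    }
    return 0;
}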
You don't need to install anything besides Visual Studio Community with the C++ compiler, and GPU drivers (these already contain the OpenCL runtimes).
For OpenCL development, you only need the OpenCL header files and the lib file. To setup Visual Studio, see here. This works for any OpenCL device, including Nvidia/AMD/Intel GPUs and even Intel CPUs if the CPU runtime is installed.
Alternatively, you can use my lightweight OpenCL-Wrapper. This comes with all Visual Studio settings already in the project file. Learning and developing OpenCL with the wrapper is much simpler than working with the cumbersome OpenCL bindings directly.
The question in the title summarizes what I aim to achieve; the details are below.
The goal is to compile C++ based mex files that rely on Intel MKL function calls (e.g. matrix inverse calculation).
In order to do so, I would like to ensure that I use the exact same Intel MKL libraries which MATLAB is shipped with, so as to avoid any compatibility issues. In this particular case, this is:
>> version('-blas')
ans =
'Intel(R) Math Kernel Library Version 2018.0.3 Product Build 20180406 for Intel(R) 64 architecture applications, CNR branch AVX
'
>> version('-lapack')
ans =
'Intel(R) Math Kernel Library Version 2018.0.3 Product Build 20180406 for Intel(R) 64 architecture applications, CNR branch AVX
Linear Algebra PACKage Version 3.7.0
'
Warning: the above Intel MKL BLAS & LAPACK libraries are not the same as the ones available for download from Intel's official website. I would prefer not to use the latter, for the above-mentioned potential compatibility reasons.
In which MATLAB folder(s) are the above static/dynamic Intel MKL libraries located?
I have searched extensively for them in the many MATLAB folders, but unfortunately I could not find them. It seems that they are 'buried' somewhere deep inside MATLAB.
How is it possible to do this at all?
My setup: Windows 10, MATLAB R2019b, Intel MKL.
I am very grateful for any help. Thank you in advance.
On my Win64 machine I find them here
[matlabroot '/extern/lib/win64/microsoft']
and here
[matlabroot '/extern/lib/win64/mingw64']
The BLAS library is named libmwblas.lib and the LAPACK library is named libmwlapack.lib.
For reference, note that in R2007a and earlier, The MathWorks shipped BLAS and LAPACK as a single combined library; they weren't shipped as two separate libraries until R2007b.
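To make the linking concrete, a hypothetical MEX file that calls into the shipped BLAS could look roughly like this (the file and function names are placeholders); it would be built from the MATLAB prompt with mex times_two_blas.cpp -lmwblas:
// times_two_blas.cpp - doubles the input array via MATLAB's bundled (MKL-backed) BLAS
#include "mex.h"
#include "blas.h"   // declares dscal/dgemm etc. with ptrdiff_t dimensions

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double matrix.");

    plhs[0] = mxDuplicateArray(prhs[0]);
    double *x = mxGetPr(plhs[0]);
    ptrdiff_t n = (ptrdiff_t)mxGetNumberOfElements(plhs[0]);
    ptrdiff_t inc = 1;
    double alpha = 2.0;
    dscal(&n, &alpha, x, &inc);   // x = 2*x, executed by the BLAS MATLAB ships with
}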
Lots of questions out there on trying to get NVCC to use the Intel compiler. It doesn't work, I get that.
The most common answer that people give is to compile the device code into a library using NVCC/cl.exe and then compile the host code separately and link them. I'm attempting this, but am getting nowhere.
In VS2012 I have created a solution with 2 projects - one CUDA, the other a console application.
I have set the CUDA project to compile with VS2012 into a static library. It compiles no problem.
I have set the console application to Intel 14.0 and to compile as an exe. I have also added the correct path to "Additional Library Dependencies" and have told the linker about the CUDA library through "Additional Dependencies" (where I also told it about cudart_static.lib).
Build dependency is also set to compile the CUDA project first.
However, this setup is no good. It gives me an error which even Google is at a loss for:
Error 5 error MSB4057: The target "ComputeLegacyManifestEmbedding" does not exist in the project. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Platforms\Win32\PlatformToolsets\Intel C++ Compiler XE 14.0\Toolset.targets 1162 7 rxnCalc_cpp
To verify that the linking is ok, if I set both projects to compile via VS2012 I get no problems.
OS - Windows 7 64-bit (32-bit application though)
Platform - VS2012
CUDA Toolkit - 6.0
CUDA Compute Capability - 5.0 (and compiled as such)
So, am I just wasting my time, or is there something I'm missing? I have gone through what seems like a hundred posts, but I have yet to see a single success. Lots of people are eager to tell you that this is what you should do, but no one tells you how to do it!
For everyone out there using Windows and trying to get CUDA and the Intel compiler to cooperate, see my initial question for how I set up the solution.
To get it to work, as per Roger Dahl's suggestion, I changed the CUDA project to a DLL.
This involved the following modifications:
Change the CUDA project to a DLL
Add __declspec(dllexport) to the CUDA wrapper function
Point the console project's linker to the DLL's import library
This works and I am now able to utilize all Intel compiler optimizations.
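For reference, the exported wrapper from step 2 might look roughly like this (the names are placeholders, not the actual project code):
// cuda_wrapper.h - shared between the CUDA DLL project and the Intel-compiled exe
#ifdef CUDA_DLL_EXPORTS
#define CUDA_API __declspec(dllexport)
#else
#define CUDA_API __declspec(dllimport)
#endif

// Implemented in a .cu file compiled by NVCC; launches the kernel on the host side.
extern "C" CUDA_API void run_kernel(const float* in, float* out, int n);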
However, please note that I did need to set the Intel compiler to single-file IPO only; multi-file IPO causes errors, which was somewhat expected.
Hope this helps others in the same boat.
I have installed Intel Parallel Studio XE 2013 alongside Visual Studio 2012 on a 32-bit Windows 7 machine. I have tried to build Boost 1.53 with the Intel compiler by following the instructions in the link, and I get this error:
.\boost/config/select_stdlib_config.hpp(18): catastrophic error:
cannot open source file "cstddef"
Has anyone else had the same problem? I would welcome any advice on how to make the ICC standard libraries visible to the Boost build process.
Thanks in advance.
After an intensive search, I finally found the solution. As explained in this link, there are two patches to apply to the Boost folder:
The intel-win.jam file in [boost-source-directory]\tools\build\v2\tools needs to be replaced by the file given in the link.
project-config.jam needs to be replaced by the intel-user-config.jam given in the link, and the build command should be changed to:
b2 --user-config=intel-user-config.jam --toolset=intel
Note that the Intel compiler version number may need to be modified in intel-user-config.jam according to your existing ICC installation.
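As a rough illustration (the version number and path below are placeholders, not taken from the link), the relevant line in intel-user-config.jam is typically a single toolset declaration such as:
using intel : 14.0 : "C:/Program Files (x86)/Intel/Composer XE 2013 SP1/bin/ia32/icl.exe" ;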
Run "./bootstrap.sh --with-toolset=intel-linux" and "b2 install" will use intel-linux.compile.c++ (boost_1_64_0).
The accepted answer to the topic in this link solved a similar problem for me, which pertained to Intel Compiler 17.0 Update 5 and Visual Studio 2017.
You need to change a couple of lines in tools/build/src/tools/intel-win.jam
Note, the build proceeds with a number of warnings.
I am trying to install and use Armadillo with OpenBLAS on Windows 8 / Visual Studio.
I managed to install Armadillo and run a small program without problems. However, it is using the default 32-bit LAPACK and BLAS binaries supplied with Armadillo, which are too slow. On Linux I saw that the speed improvement from OpenBLAS (compared to the default BLAS) is huge when multiplying large matrices, and I would like to get the same on Windows 8.
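The kind of comparison I did is roughly the following (a minimal sketch; the matrix size is arbitrary):
// bench.cpp - multiplies two large matrices; the product is dispatched to whichever BLAS is linked
#include <armadillo>
#include <iostream>

int main() {
    arma::mat A(2000, 2000, arma::fill::randu);
    arma::mat B(2000, 2000, arma::fill::randu);
    arma::wall_clock timer;
    timer.tic();
    arma::mat C = A * B;
    std::cout << "multiply took " << timer.toc() << " s, trace = " << arma::trace(C) << std::endl;
    return 0;
}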
When I try to replace the BLAS lib with OpenBLAS from here:
http://sourceforge.net/projects/openblas/files/v0.2.8/
and try to link against libopenblas.lib, I get a lot of BLAS-related unresolved externals (again, not the case when I link with the Armadillo-supplied blas_win32_MT.lib) and really have no idea how to solve the problem.
Any help much appreciated.
JPN