Linking MKL library in C++

I'm having a lot of trouble linking the Intel MKL libraries to my C++ code. I downloaded MKL from this link:
https://software.intel.com/content/www/us/en/develop/tools/math-kernel-library/choose-download/windows.html
The instructions then say to use the MKL Link Line Advisor to obtain the proper compiler options, which I'm having trouble figuring out. For reference, I'm on Windows, using g++ 8.1.0 with MinGW-W64. Here are my choices, part by part:
Intel product: Intel MKL 2020, since I just downloaded it.
OS: Windows, no issues there.
Compiler: Intel(R) Fortran? I'm using g++ to compile my C++ code, so I have no idea, since that option is not available. From some research on Stack Overflow, it seems the right choice is Intel(R) Fortran.
Architecture: Intel(R) 64, since I have a 64-bit OS?
Dynamic/Static Linking: Static linking, I guess?
Interface layer: 64-bit, since I have a 64-bit OS?
Threading layer: OpenMP, since my current C++ code uses -fopenmp?
OpenMP library: Intel(R) libiomp5. Only one option, so no issues here.
Fortran95 Interfaces: BLAS95 and LAPACK95
The above choices give me the following compiler options
/4I8 /module:"%MKLROOT%"\include\intel64/ilp64 -I"%MKLROOT%"\include
And this results in an error from the compiler:
/4I8: No such file or directory
Can somebody help me please?

I think you should use -DMKL_ILP64 -I"%MKLROOT%"\include for g++. The /4I8 and /module: switches in the advisor's output are Intel Fortran compiler options, which g++ does not understand; that is why it treats /4I8 as a file name and reports "No such file or directory".
You should definitely not pick Intel(R) Fortran for the compiler, since you are not compiling Fortran programs.
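For what it's worth, a working g++/MinGW command line could look roughly like the sketch below once the Fortran-specific switches are dropped. This is only a sketch: the library directory layout and names (in particular whether you link the single runtime library mkl_rt or the individual interface/threading/core libraries) vary between MKL versions and between static and dynamic linking, so verify them against your own %MKLROOT% tree and the MKL linking documentation.

```
REM Sketch only -- paths and library names are assumptions to check against your install.
REM Compile: select the ILP64 (64-bit integer) interface with a define and add the MKL headers.
g++ -m64 -DMKL_ILP64 -fopenmp -I"%MKLROOT%\include" -c main.cpp -o main.o

REM Link: the single dynamic runtime library mkl_rt is the simplest way to get started.
g++ main.o -L"%MKLROOT%\lib\intel64" -lmkl_rt -fopenmp -o main.exe
```

Whatever you end up linking, keep the interface consistent: -DMKL_ILP64 must be paired with the ILP64 interface library, and the default 32-bit-integer (LP64) interface must be used without that define, otherwise calls into MKL will pass wrongly sized integers.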

Related

Difference between the GNU ARM Embedded Toolchain and normal gcc/g++ with bare metal ARM architecture

I have been using the GNU ARM Embedded Toolchain for a while and compiling my embedded C++ code using arm-none-eabi-g++, because that is what we did in my embedded systems university courses. For my computer science courses we used plain g++ to compile C++ code. I have been poking around the GCC manual and found that there are ARM architecture compilation options for GCC. My question is: what is the difference between using the arm-none-eabi-g++ binary provided by ARM and using g++ with the -mcpu=cortex-m4 -march=armv7 compile options for cross-compiling? It appears you can cross-compile for ARM using gcc (the gcc that comes with Ubuntu).
I think I figured it out. Using GCC you can build a cross compiler and an associated toolchain. ARM built their own cross compiler and put it up for people to use as the "Official GNU ARM Embedded Toolchain". It's basically a meta "I used the compiler to build the compiler" problem. The -mcpu=cortex-m4 -march=armv7 options I was seeing were for targeting architectures when building GCC itself, not for use when compiling with an already-built native g++.
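To make the difference concrete, here is a hedged sketch of the two invocations (the source file name is hypothetical, and the exact bare-metal flags depend on your board and linker script):

```
# The prebuilt cross compiler already generates ARM code; the flags only select
# the exact core and instruction set:
arm-none-eabi-g++ -mcpu=cortex-m4 -mthumb -specs=nosys.specs -o firmware.elf main.cpp

# The distribution's native g++ was built to target the host (x86-64), so it will
# reject or misinterpret ARM-specific options such as -mcpu=cortex-m4:
g++ -mcpu=cortex-m4 -c main.cpp
```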

How to exclude /usr/include/c++/4.3/ when compiling code with the Intel compiler

I am working on a cluster which has an older version of the Intel compiler (11) and gcc (4.3).
I have installed a newer trial version of Intel Composer XE (with the 14.0 compiler). I have also installed gcc 4.9. Both the newer gcc and the newer Intel compiler are in my home directory (non-root).
I use C++11 in my code, so obviously I use the -std=c++11 flag when compiling, and I provide -L and -I flags in my makefile to point at Intel's libraries and headers.
When I try to compile my code with icpc, the compiler looks in the /usr/include/c++/4.3/... path.
I tried to remove the path by setting C_INCLUDE_PATH and CPLUS_INCLUDE_PATH to /home/peter/intel/composerxe/include.
But it still looks in the /usr/include/c++ path. Because of this, the old files under /usr/include/c++/4.3/tr1_impl/... are included instead of the latest ones.
How do I stop the Intel compiler from looking in these paths and make it pick up the new ones? And if I use gcc 4.9 instead of the Intel compiler, what changes do I need to make?
I tried adding the -nostdinc flag when compiling, but no luck. It gives an error:
catastrophic error: cannot open source file "iostream"
because the first included header is iostream.
I figured out that the Intel compiler borrows its C++ headers from the gcc that is already installed on the system, picking them up from the /usr/include or /usr/local/include paths.
In my case, the operating system on our cluster is SUSE Linux with gcc 4.3 as the default GNU compiler.
Now, the cluster management software is set up so that two compilers (Intel and gcc) cannot be loaded simultaneously (module intel version 11 and module gcc version 4.5).
After running gcc --print-search-dirs I found that the Intel compiler inherits some common headers (iostream, etc.) from the existing gcc compiler (4.3 in this case) and therefore includes the old tr1_impl files and headers, which is why I get errors when compiling C++11 code.
I did install gcc 4.9 in my home directory, but Intel 14 has compatibility issues when including headers from gcc 4.9, which again produces weird compiler errors.
While using gcc 4.5 (via module load gcc-4.5) at least got the code to compile, I think the only real solution is to use gcc 4.7.2 and provide its include path, since that was the first release with C++11 support (not just the c++0x working draft), implementing most of C++11 except concurrency.
One needs to know that the Intel compilers depend on an existing gcc compiler, provided by the OS or installed by the user/root. Without one, they are pretty much of no use.
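For completeness, icpc also has switches to point it at a specific gcc installation instead of the system default. A hedged sketch follows; the gcc path is hypothetical, and whether these options exist in your particular icpc version should be checked against its documentation:

```
# Ask icpc to take its C++ headers and runtime from a user-installed gcc
# rather than the distribution's /usr/include/c++/4.3 tree (gcc path is hypothetical):
icpc -std=c++11 -gxx-name=/home/peter/gcc-4.7.2/bin/g++ \
     -cxxlib=/home/peter/gcc-4.7.2 -o app main.cpp
```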
Cheers!

How can I create a project in Eclipse-nsight that uses both Intel C++ and CUDA C++?

I want to use ICC (the Intel C++ compiler) together with CUDA's NVCC (the NVIDIA C++ compiler) on Linux in Eclipse-nsight.
I installed CUDA 5.5 with Eclipse-nsight and Intel Cluster Studio XE 2013,
and then I installed the ICC toolchain plug-ins in Eclipse-nsight via the Help -> Install New Software... menu: https://software.intel.com/en-us/articles/intel-c-compiler-for-linux-using-intel-compilers-with-the-eclipse-ide-pdf
Now I can create either an Intel C++ project or a CUDA C++ project. But how can I create a project that contains both .cpp files and .cu files at the same time, and that automatically compiles the .cpp files with ICC and the .cu files with NVCC?
Which project should I create, Intel C++ or CUDA C++, and what should I do then?
(Or maybe someone knows how to create a project in Eclipse-nsight that uses both GCC and ICC; that would also help me.)
In the CUDA project properties, specify NVCC's -ccbin command-line flag; it selects the host compiler that NVCC invokes for the CPU-side code.
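On the command line the same mechanism looks roughly like this (a sketch only; the icpc path is an example, and you should confirm that your CUDA release supports ICC as a host compiler):

```
# -ccbin tells NVCC which host compiler to hand the CPU-side code to;
# the path to icpc depends on your Intel installation.
nvcc -ccbin /opt/intel/bin/icpc -O2 -o app kernel.cu host.cpp
```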

Compiling boost as i386 on AMD64 (Ubuntu 11.10)

I'm currently programming an extension to a program which only supports i386 (and I am running amd64 Ubuntu 11.10). Whenever I compile my extension source I need to use the -m32 flag to force a 32-bit build (otherwise the program will not be able to load my extension). Sooner or later using Boost becomes unavoidable, thanks to its huge and stable library, which leads to my problem.
I want to use Boost.Filesystem, which uses OS-specific function calls, which in turn means I need an actual library file rather than a header-only implementation. The problem is: I can't / don't know how to set up Boost.Filesystem (the i386 version) on my amd64 machine. If I download a prebuilt (.deb) package for i386 and install it using --force-architecture, it still fails, complaining about dependencies.
So basically: how do I set up Boost with a 32-bit (i386) architecture on my (amd64) system?
It seems as if I did it right all along, but coming from a Windows environment I was too dumb to realize how to properly link libraries with the GCC linker. You can easily compile the Boost libraries by using the -m32 flag and setting up bjam properly. See the first answer to this question for details: How do I force a 32 bit build of boost with gcc?
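For reference, the bjam/b2 setup mentioned above typically boils down to something like this, run from the extracted Boost source tree (a sketch; option spellings differ slightly between Boost releases, so check the Getting Started guide for your version):

```
# Build only Boost.Filesystem (and Boost.System, which it depends on) as 32-bit
# libraries on a 64-bit host; the -m32 flags force i386 code generation.
./bootstrap.sh
./b2 address-model=32 cflags=-m32 cxxflags=-m32 linkflags=-m32 \
     --with-filesystem --with-system stage
```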

Are Static Libraries Machine Independent?

Well, I am developing a program in C++ on Ubuntu 10.04.1 LTS (Intel Core2 Quad), but the releases run on Debian 5.0.5 (Intel(R) Xeon(R) CPU). Some libraries, such as Crypto++ or mysqlclient, have different versions on the two systems. So I decided to compile the binary statically, with all the libraries statically linked, on Ubuntu and then upload the finished binary to the Debian machine.
I am not sure if this method is correct, because the static libs may be architecture-dependent and could conflict on the Debian machine. If I want to use the newer Ubuntu library versions on Debian, should I compile them on Debian?
Thanks in advance
They're architecture-dependent. Usually, though, libraries get compiled for a common baseline architecture on x86 machines, such as i686, which will run fine on both an Intel Xeon and an Intel Core2 Quad (but not on, for example, an old Intel Pentium processor).
No, it is not machine independent. The only difference is that all the libraries are bundled into the executable, so there is no risk of the program failing at load time with a "library not found" message. In summary, it will work across Linux distributions, but it will not work on Windows, for example.
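A hedged illustration of the workflow discussed above (the program name and library set are placeholders, and whether static archives of every dependency are actually installed on Ubuntu is something to verify):

```
# Build on the Ubuntu machine with libraries linked statically; -static makes the
# linker prefer static archives (.a) over shared objects.
g++ -O2 -static -o myprog main.cpp -lcryptopp -lmysqlclient -lpthread

# Sanity checks before copying the binary to the Debian machine:
file myprog     # shows the architecture the binary was built for
ldd myprog      # "not a dynamic executable" confirms no shared libraries are needed
```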