I wrote a C++ shared library that uses Intel MKL for BLAS operations, and it threads beautifully, using all 12 cores of the machine. I am now trying to use Rcpp to call a function from my library, and I am finding that it is single-threaded. That is, for the same data, when the same function is called from C++ it uses all 12 cores very quickly, whereas when Rcpp calls it, it is single-threaded and takes much longer (but the results are consistent).
Intel MKL is dynamically linked to my library as follows:
Makefile:
LIBRARIES=-lpthread -Wl,--no-as-needed -L<directory>bin -liomp5 -L<bin_directory> -lmkl_mc3 -lmkl_intel_lp64 -lmkl_gnu_thread -ldl -lmkl_core -lm -DMKL_ILP64 -fopenmp
LFLAGS=-O3 -I/opt/intel/composer_xe_2015/mkl/include -std=c++0x -m64
#Compiles the shared library
g++ -fPIC -shared <cpp files> -oliblibrary.so $(LIBRARIES) -O3 -I/opt/intel/composer_xe_2015/mkl/include -std=c++0x -m64
#Compile a controller for R, so that it can be loaded as dyn.load()
PKG_LIBS='`Rscript -e "Rcpp:::LdFlags() $(LIBRARIES) $(LFLAGS)"`' \
PKG_CXXFLAGS='`Rscript -e "Rcpp:::CxxFlags()"` $(LIBRARIES) $(LFLAGS) ' \
R CMD SHLIB fastRPCA.cpp -o../bin/RProgram.so -L../bin -llibrary
Then I call it in R:
dyn.load("fastRPCA.so", local=FALSE);
Please note that I would prefer not to set MKL as the BLAS/LAPACK alternative for R, so that other people using this code don't have to change it for all of R. As such, I am trying to use it only in the C++ code.
How can I make the program run multithreaded through Rcpp, just as it does when run outside of R?
Based on this discussion, I am concerned that this is not possible. However, I wanted to ask, because since Intel MKL uses OpenMP, perhaps there is some way to make it work.
There are basically two rules for working with R code:
1. Create a package.
2. Follow rule 1.
You are making your life hard by ignoring these.
Moreover, there are a number of packages on CRAN happily using OpenMP -- study those. You also need to learn about thread settings -- see e.g. the RhpcBLASctl package, which does this.
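As a hedged sketch of the thread-setting side (my addition, not from the original answer): the thread counts used by OpenMP and MKL can also be forced via environment variables before R starts, or from within R via RhpcBLASctl::omp_set_num_threads() / blas_set_num_threads().
# set OpenMP/MKL thread counts before launching R (run_analysis.R is an illustrative script name)
export OMP_NUM_THREADS=12
export MKL_NUM_THREADS=12
Rscript run_analysis.R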
Lastly, you can of course connect R directly with the MKL; see the gcbd package and its vignette.
Edit three years later: See this post for details on installing the MKL easily on a .deb system
Related
I'm trying to build this: https://github.com/hselasky/hpsat_generate
This is their makefile:
PROG_CXX=hpsat_generate
PREFIX?=/usr/local
MAN=
SRCS= hpsat_generate.cpp
BINDIR?=${PREFIX}/bin
.if defined(HAVE_DEBUG)
CFLAGS+= -g -O0
.endif
CFLAGS+= -I${PREFIX}/include
LDFLAGS+= -L${PREFIX}/lib -lgmp -lgmpxx
.include <bsd.prog.mk>
Using just make results in:
Makefile:7: *** missing separator. Stop.
So after some searching I found that I need to use FreeBSD make, so I tried:
bmake . hpsat_generate
which complains that mergesort is not declared; mergesort is a FreeBSD function, so I can only assume the header that declares it is not actually being included.
I tried to find a way to make it build, but I'm out of ideas.
The Makefile requires some changes for Linux (and NetBSD). The bsd.prog.mk implementation that comes with bmake on Linux is slightly off and requires a workaround to link the program correctly; also, on Linux you need libbsd:
These issues are fixed by PR #1.
Instead of running the makefile, you can just compile it directly:
g++ -o hpsat_generate hpsat_generate.cpp -lgmp -lgmpxx -l:libbsd.a
You may need to install the gmp and libbsd development libraries first.
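On a Debian/Ubuntu-style system, for example, that would be something like the following (package names may differ on other distributions):
# development headers and libraries for GMP and libbsd
sudo apt-get install libgmp-dev libbsd-dev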
Environment A: CentOS 7 (same OS) / GCC 7.3.1 (newer GCC)
Environment B: CentOS 7 (same OS) / GCC 4.8.5 (older GCC)
I have built a C++ executable in environment A and run it in environment B.
I haven't had a problem so far, but could there be a problem with this approach?
Technically, building a static executable improves portability.
So compile your C++ files with g++ -O -Wall -Wextra -g and link the object files with g++ -static *.o
Alternatively, link only libc.so dynamically and not the C++ standard library. So link with g++ *.o -static-libstdc++ -static-libgcc, or wrap the libraries you want linked statically in -Wl,-Bstatic ... -Wl,-Bdynamic.
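Put together, a minimal fully static build might look like this (file names are illustrative; on CentOS you may need the static variants of the system libraries installed, e.g. glibc-static):
# compile the translation units, then link everything statically
g++ -O -Wall -Wextra -g -c main.cpp util.cpp
g++ -static main.o util.o -o myprog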
For details, read the documentation of GCC, of GNU binutils, perhaps of GNU bash, as well as the Program Library HOWTO and Drepper's paper How To Write Shared Libraries.
Notice that to run an executable (see elf(5) and execve(2) and other syscalls(2)....) you don't need a compiler.
You may want to use strace(1) and ldd(1) on machine B.
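For example (a hedged sketch; ./myprog stands in for your executable), you can list the shared libraries the binary needs on machine B and the glibc/libstdc++ symbol versions it requires:
# shared libraries the dynamic linker will try to resolve
ldd ./myprog
# versioned symbols such as GLIBCXX_3.4.x must be provided by machine B's libstdc++
objdump -T ./myprog | grep GLIBC | sort -u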
Don't forget to have many testcases.
Notice that GCC, Bash, and Binutils are free software. You are allowed to download and study their source code and improve it (there are licensing conditions when you redistribute an improved binary; read their GPL license).
You could have legal or licensing issues (e.g. if your C++ code uses Qt). You may need to consult a lawyer about them.
See also LinuxFromScratch.
I am currently compiling code on an HPC system that was set up by Cray. To call the Fortran, C, and C++ compilers, it is suggested to use the ftn, cc, and CC compiler wrappers provided by Cray.
Now, I would like to know which options the ftn wrapper adds to the actual compiler call (in my case to ifort, but it should not matter). From working with MPI wrappers I know the option --showme to get this information:
> mpif90 --showme
pgf90 -I/opt/openmpi/pgi/ib/include -fast -I/opt/openmpi/pgi/ib/lib -L/opt/openmpi/pgi/ib/lib -lmpi_f90 -lmpi_f77 -lmpi -libverbs -lrt -lnsl -lutil -ldl -lm -lrt -lnsl -lutil
## example from another HPC system; MPI wrapper around the Portland Group Fortran compiler
I am looking for an option like --OPTION_TO_GET_APPENDED_FLAGS that provides the same information for the ftn wrapper:
> ftn --OPTION_TO_GET_APPENDED_FLAGS
ifort -one_option -O2 -another_option
Because it is Friday afternoon local time, all colleagues with knowledge on this topic (as well as the cluster support team) have already left for the weekend.
Thanks in advance for the answers.
On the Cray system I am using (Cray Linux Environment (CLE), 27th Apr. 2016), the appropriate option is -craype-verbose:
ftn -craype-verbose
> ifort -xCORE-AVX2 -static -D__CRAYXC [...]
It is documented in the man page, which I had only scanned quickly before asking this question:
-craype-verbose
Print the command which is forwarded to compiler invocation.
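Presumably (this is my assumption, not something I verified on every system) the companion C and C++ wrappers accept the same option, so their underlying compiler invocations can be inspected the same way; check the wrapper man pages if in doubt:
# show the commands the Cray wrappers forward to the underlying compilers
cc -craype-verbose
CC -craype-verbose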
I have a package that depends on Rcpp and uses two other libraries compiled from sub-directories in src/. The package builds fine on Mac OSX using a clang compiler. However, on an RStudio Ubuntu server, it fails to build. The build's first two steps (creating the static libraries in the sub directories to link in) work fine and I can see sensible build commands like the following taking place:
g++ -Wall -I../../inst/include/ --std=c++11 -lhts -L../htslib/ -lz -lm -c -o someLibFile.o someLibFile.cpp
However, in the very last step of the build process, where it tries to build the Rcpp code and bind to the library, for some reason it appears to completely fail to put the compiler command (g++) in front and only outputs the second half of the command.
-o mypackage.so RcppExports.o temp.o -lhts -lpbbam -Lpbbam/ -L/htslib/ -Lpbbam/ -L/mnt/software/r/R/3.1.1/usr/lib/R/lib -lR
In contrast, on the Mac it builds just fine, appending clang++ and other flags in front of this final command:
clang++ -std=c++11 -dynamiclib -Wl,-headerpad_max_install_names -undefined dynamic_lookup -single_module -multiply_defined suppress -L/Library/Frameworks/R.framework/Resources/lib -L/usr/local/lib -o pbbamr.so LoadData.o RcppExports.o temp.o -lhts -lpbbam -Lpbbam/ -Lhtslib/ -Lpbbam/ -F/Library/Frameworks/R.framework/.. -framework R -Wl,-framework -Wl,CoreFoundation
How do I make it use the g++ compiler on Ubuntu at this step? I have a custom Makevars file, but it is there just to build the dependencies in the sub-directory, so I don't know why that would cause any problems (since it works on Mac OSX).
More Information
The compiler seems to be found if I delete my Makevars file. However, the Makevars file I am using is essentially a direct copy of the example given in the R extensions guide with one addition to enable C++11:
CXX_STD = CXX11
.PHONY: all mylibs
all: $(SHLIB)
$(SHLIB): mylibs
mylibs:
(cd subdir; make)
With the line CXX_STD removed, it does stick a compiler in front of the command.
Briefly:
What is your R installation? You should probably run the binaries provided by Michael via CRAN; they are based on my Debian upload; I run these too on a bunch of machines.
The reason is that R 'remembers' its compile-time settings via $RHOME/etc/Makeconf. This should just be CXX=g++.
When you install r-base-dev (from Ubuntu or the newer version from CRAN) you also get the build-essential package as well as all common dependencies. With that things just work.
If however you are doing something special or local, well then you have to deal with your local changes. The basic Ubuntu setup is used by thousands of people and daily jobs -- including e.g. Travis builds for countless GitHub repos.
This is caused by using an outdated/unusual R installation which has poor support for C++11. The best way to resolve this is to upgrade to a more recent version of R, or use a standard R install (sudo apt-get install r-base-dev). A poor workaround is described below.
Problem Cause and Bad Workaround
When writing an R extension that uses C++11, one often sets CXX_STD = CXX11 in the Makevars file or lists SystemRequirements: C++11 in the DESCRIPTION file. Either of these triggers R to use the compiler set by the following variables in the Makeconf file (located at file.path(R.home(), "etc/Makeconf")):
CXX1X
CXX1XFLAGS
CXX1XPICFLAGS
CXX1XSTD
Note that some of these may be set in this file, but not all of them might be there, which indicates a problem. In the event there is a problem with these settings or they are not set at all, R appears to use the empty string "" as the compiler/linker for the C++ code, leading to the problem shown above where no compiler command is given.
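A quick way to check whether these variables are actually set for a given R installation is to grep the Makeconf file directly (a hedged diagnostic on my part, not from the original answer):
# print the C++11-related compiler settings of this R installation
grep "^CXX1X" "$(R RHOME)/etc/Makeconf"
If nothing comes back, or the values are empty, the installation has no usable C++11 compiler configured.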
If upgrading is not an option and you need to deploy on a known machine, one work around is to manually setup for C++11 by making a more idiosyncratic Makevars file. For example, you could:
Remove the CXX_STD=CXX11 line from the Makevars file.
Remove SystemRequirements: C++11 from the DESCRIPTION file.
Add --std=c++11 and any other requirements needed to PKG_CPPFLAGS, PKG_CFLAGS, PKG_CXXFLAGS, or whatever variable is being used to compile your code, to manually set the needed flags (assuming the machine's compiler actually does support C++11); see the sketch after this list.
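A hedged sketch of what such a Makevars could look like (the subdir name is carried over from the question; the flag value is the only part that matters here):
# set the C++11 flag by hand instead of relying on CXX_STD/CXX1X
PKG_CXXFLAGS = -std=c++11
.PHONY: all mylibs
all: $(SHLIB)
$(SHLIB): mylibs
mylibs:
	(cd subdir; make)
Note that the recipe line under mylibs: must start with a tab character.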
The above solution is not particularly robust, but can be used as a work around in case the machine cannot be upgraded.
Thanks to @DirkEddelbuettel for not only writing Rcpp but also being willing to support it on Stack Overflow and help with issues like this.
I am working on a C++ project that uses the Armadillo library to compute some linear algebra.
To do this, I downloaded the Armadillo package and installed it successfully, and my code/project works fine. But now I want to remove the installed library (Armadillo) and instead access it from a folder that contains the full Armadillo package, using a file path.
Is it possible to do so (accessing it using a file path)? If this is the right way, could I have a simple illustration?
Thank you for your time.
Assuming that you have Linux or Mac OS X and a recent version of armadillo unpacked in /home/kahsay/, you can use the following command:
g++ myprog.cpp -o myprog -O2 -I /home/kahsay/armadillo-4.400.2/include -DARMA_USE_LAPACK -DARMA_USE_BLAS -DARMA_DONT_USE_WRAPPER -llapack -lblas
Under Mac OS X you may need to use -framework Accelerate instead of -llapack -lblas
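For instance, a hedged macOS variant of the command above would be (the unpack path is illustrative):
g++ myprog.cpp -o myprog -O2 -I /Users/kahsay/armadillo-4.400.2/include -DARMA_USE_LAPACK -DARMA_USE_BLAS -DARMA_DONT_USE_WRAPPER -framework Accelerate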
You can tell the compiler where it should look for the Armadillo headers, e.g. g++ -I "$HOME/project/embedded_armadillo_headers" ... (a bare -I~/... will not be tilde-expanded by the shell). To use Armadillo it suffices to provide the header files; you do not need to link against an Armadillo library itself, just make sure to link against BLAS and LAPACK.