The Fortran compiler catches a lot of my stupid mistakes, like providing the wrong number of arguments or passing an argument of the wrong type.
However, I use LAPACK and BLAS a lot, and when I make those mistakes when calling their subroutines, gfortran happily compiles the code, resulting in errors that are hard to debug.
Is there a way to have gfortran check the calls to those shared libraries for me?
I compile using the following commands:
gfortran -o foo.o -c foo.f
gfortran -o bar bar.f foo.o -lblas -llapack
I am currently trying to integrate some third-party code (program A) into another third-party program (program B). Unfortunately, it seems that some COMMON blocks and subroutines share names between the two codes. This is not detected by the compiler (I suspect because the compilation process involves many different files and creates a shared object), but the program crashes when accessing certain common blocks / subroutines with very general names (e.g. BASIS, JACOBIAN), and renaming them alleviates the problem. However, renaming all common blocks and subroutines within program A is not feasible because of its size.
At the moment, I have two separate directories of code. I compile both separately with the Intel compiler into .o files and then create a shared object from both:
ifort -c -fPIC -fp-model precise codeA.f
ifort -c -fPIC -fp-model precise codeB.f
ifort -c -fPIC -fp-model precise code_coupling.F90
ld -shared -o library.so codeA.o codeB.o code_coupling.o
The code in code_coupling.F90 is for coupling both codes and it is called within codeB.f, which I cannot change.
Is there a possibility to compile codeA.f with some additional compiler flags so that the names of the COMMON blocks and subroutines don't interfere with each other?
Is there some other way I can prevent the names from interfering with each other?
One (slightly hacky) solution I have discovered is to compile codeA.f with the flag -assume nounderscore and to manually append a trailing underscore to the names of the subroutines that code_coupling.F90 needs to call:
ifort -c -fPIC -fp-model precise -assume nounderscore codeA.f
ifort -c -fPIC -fp-model precise codeB.f
ifort -c -fPIC -fp-model precise code_coupling.F90
ld -shared -o library.so codeA.o codeB.o code_coupling.o
Rename the subroutine codeA_subroutine within codeA.f to codeA_subroutine_.
I have written a new sqrt(double) in libtest.cpp and compiled it into a dynamic (or static) library, libtest.so. The function's interface is the same as in math.h.
Now I want to link main.o against libtest.so instead of libm.a, and I am wondering how to do that with clang++ or g++.
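For reference, libtest.cpp is roughly the following (a minimal sketch; the Newton iteration is just a placeholder for whatever my real implementation does):
// libtest.cpp -- hypothetical replacement for sqrt(double),
// with the same signature as the one declared in math.h.
extern "C" double sqrt(double x) {
    // Placeholder implementation: plain Newton iteration,
    // no handling of negative input, NaN or infinities.
    double guess = x > 1.0 ? x : 1.0;
    for (int i = 0; i < 60; ++i)
        guess = 0.5 * (guess + x / guess);
    return guess;
}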
It seems that -lstdc++ pulls in -lm by default.
Thanks a lot!
I have been writing a C++ program on Ubuntu and Windows 8 using Armadillo. Under Windows 8 the program compiles without problems.
The program just uses the linear systems solver.
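The relevant part is roughly along these lines (a simplified sketch, not my exact code):
#include <armadillo>
// Solve A x = b with Armadillo's linear systems solver.
int main() {
    arma::mat A = arma::randu<arma::mat>(4, 4);
    arma::vec b = arma::randu<arma::vec>(4);
    arma::vec x = arma::solve(A, b);   // this is where LAPACK gets used
    x.print("x:");
    return 0;
}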
Under Ubuntu the compiler says
"reference to `wrapper_dgels_' not defined"
The compiler line I use is:
mpic++ -O2 -std=c++11 -Wall -fexceptions -O2 -larmadillo -llapack -lblas program.o
However, right before the error I see:
g++ module_of_the_error.o
That is something I haven't set up myself.
I am using Code::Blocks on Ubuntu, and I compiled Armadillo with all the libraries that CMake asked for (BLAS, LAPACK, OpenBLAS, HDF5, ARPACK, etc.).
I have no clue what might be causing the problem, since the exact same code compiles in Visual Studio. I have tried the suggested compiler line modifications, but they do not seem to work.
Any help is appreciated.
This is a trap I fell into myself once. You will not like the likely cause of your error.
The order of the arguments to the linker matters.
Instead of
mpic++ -O2 -std=c++11 -Wall -fexceptions -O2 -larmadillo -llapack -lblas program.o
try:
mpic++ -O2 -std=c++11 -Wall -fexceptions -O2 program.o -larmadillo -llapack -lblas
I.e., put the object files to be linked into the executable before the libraries. The linker processes its arguments from left to right and only pulls in, from each library, the symbols that are still unresolved at that point, so a library has to come after the object files that reference it.
By the way, at this stage you are only linking files that have already been compiled. It is not necessary to repeat command-line options that are only relevant for compiling. So this will be equivalent:
mpic++ program.o -larmadillo -llapack -lblas
Moreover, depending on how you installed Armadillo, you are adding either one or two superfluous libraries in that line. One of the following should be enough:
mpic++ program.o -larmadillo
or
mpic++ program.o -llapack -lblas
EDIT: as the answer by rerx states, the problem is probably just the ordering of the switches/arguments supplied to g++: the -l switches need to come after the source and object files that use those libraries. For example:
g++ prog.cpp -o prog -O3 -larmadillo
original answer:
Looks like your compiler can't find the Armadillo run-time library. The proper solution is to specify the path to the Armadillo run-time library using the -L switch. For example:
g++ -O2 blah.cpp -o blah -L /usr/local/lib/ -larmadillo
Another possible solution is to define ARMA_DONT_USE_WRAPPER before including the armadillo header, and then directly link with LAPACK and BLAS. For example:
#define ARMA_DONT_USE_WRAPPER
#include <armadillo>
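In that case the program is linked directly against LAPACK and BLAS, e.g. with something along the lines of:
g++ prog.cpp -o prog -O3 -llapack -lblas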
More details are available at the Armadillo frequently asked questions page.
I'm learning how to make C++ call Haskell code from a library. I was following the instructions from FFI complete examples: http://www.haskell.org/haskellwiki/FFI_complete_examples
However, after
ghc -v Foo.hs
only Foo_stub.h and Foo.o are created; there is no Foo_stub.c or Foo_stub.o. According to Calling Haskell from C, GHC > 7.2 doesn't generate _stub.o files anymore.
In this case, do I still need a _stub.o to link an executable using g++?
At the moment, after
g++ -o test Foo.o test.o `cat link_options`
I get lots of undefined symbol errors for hs_init and the like. Is that because _stub.o is not present, or is something else missing?
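For context, test.cpp is roughly the following (a simplified sketch; foo is a hypothetical function standing in for whatever Foo.hs actually exports):
// test.cpp -- C++ side of the FFI example (sketch).
// Assumes Foo.hs contains: foreign export ccall foo :: Int -> Int
#include <iostream>
#include "HsFFI.h"
#include "Foo_stub.h"

int main(int argc, char *argv[]) {
    hs_init(&argc, &argv);              // start the GHC runtime
    std::cout << foo(42) << std::endl;  // call into the Haskell library
    hs_exit();                          // shut the runtime down
    return 0;
}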
I can link them correctly using ghc:
ghc -no-hs-main -o test test.o Foo.o -lstdc++
(after reading this question: Building a dynamic library with haskell and using it from C++),
but I wonder: is it still possible to link using g++?
You're looking at an out-of-date example (it uses GHC 6.12.3). This example works for GHC 7.6.3:
http://www.haskell.org/haskellwiki/GHC/Using_the_FFI
I have a file main.cpp containing an implementation of int main() and a library foo split up between foo.h and foo.cpp.
What is the difference (if any) between
g++ main.cpp foo.cpp -o main
and
g++ -c foo.cpp -o foo.o && g++ main.cpp foo.o
?
Edit: of course there is a third version:
g++ -c foo.cpp -o foo.o && g++ -c main.cpp -o main.o && g++ main.o foo.o -o main
The total work that the compiler and linker (and the other tools the compiler invokes) have to do is exactly the same, give or take a few minor things: in the first example the compiler creates temporary object files for foo.o and main.o and deletes them afterwards, in the second example foo.o remains on disk, and in the third example both object files remain.
The main difference comes when you have a larger project and use a Makefile to build the code. The advantage there is that, since the Makefile only recompiles things that need to be recompiled, you don't have to wait for the compiler to recompile code that doesn't need it. This assumes, of course, that the Makefile uses the g++ -c file.cpp -o file.o variant (which is the typical way to do it) and not g++ file.cpp main.cpp ... -o main.
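For example, with the file names from the question, after editing only foo.cpp a Makefile-driven build would only need to re-run something like:
g++ -c foo.cpp -o foo.o
g++ main.o foo.o -o main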
Of course, there are other possible scenarios: in unit testing, for example, you may want to build a test around the same object file that you use to build the main application. Again, this makes more of a difference when the project is large and has half a dozen or more source files.
On a modern machine, compiling doesn't take that long: my compiler project (~5500 lines of C++ code) that links with LLVM takes about 10 seconds to compile the source files, and another 10 seconds to link with all the LLVM files. That's a debug version of the LLVM libraries, so it produces a 120+ MB executable.
Once you get to commercial-scale software (or comparable open-source projects), the number of source files involved can run into the hundreds, and the sources can easily be anywhere from 100k to several million lines. Now it starts to matter whether you just recompile foo.cpp or "everything", because compiling everything can take an hour of CPU time. Sure, on a multicore machine that is still only a few minutes, but it's not ideal to spend minutes when you could spend just a few seconds.
If you type something like this:
g++ -o main main.cpp foo.cpp
You are compiling and linking the two .cpp files in one step and generating an executable file called main (the name is given by -o).
If you type this:
g++ main.cpp foo.cpp
You are compiling and linking the two .cpp files in one step, generating an executable file with the default name a.out.
Finally, if you type this:
g++ -c foo.cpp
You will generate an object file called foo.o, which can later be linked with g++ -o executable_name file1.o ... fileN.o
The -c option stops the g++ driver after compilation, so you get object files instead of a linked executable, and -o simply names the output file; using them together lets you split compiling and linking into separate steps. I have found a link which may provide helpful information about this. It talks about gcc (the C compiler), but gcc and g++ work similarly after all:
http://www3.ntu.edu.sg/home/ehchua/programming/cpp/gcc_make.html
Be careful with the syntax of the commands you are using. If you work with Linux and have problems with a command, just open a terminal and type "man name_of_the_command" (for example, man g++) to read about its syntax, options, return values, and other relevant information; man pages cover commands, system calls, user library functions, and much more.
Hope it helps!