I'm porting some code from MATLAB to C++ and discovered that MATLAB's sin() and cos() functions produce slightly different results from the sin() and cos() functions in the C++ library. To eliminate these differences, I would like my C++ code to call the sin() and cos() functions from the fdlibm 5.3 library, which is what I think MATLAB uses for sin() and cos() operations.
However, I have been having some difficulty using the fdlibm library. I am using Visual Studio 2010, and I downloaded the fdlibm header file and source files from http://www.validlab.com/software/, but I am not sure of the best way to use them. Do I need to build the files into a static or dynamic library first, and then link it to my code? Also, how do I specify that I want to use the sin() from fdlibm rather than the one from the C++ library? Do I need to modify the fdlibm source code so that the sin() and cos() functions are within a namespace?
Any guidance is greatly appreciated.
Essentially, you have two tasks to complete:
You must compile the fdlibm source to produce an object module suitable for your purpose.
You must link the object module with your other object modules.
I see two issues with the first task. One, sources from projects like fdlibm are typically written to be portable to many systems and may involve a fair amount of work to configure. Rather than being very simple C or C++ code, they may use a number of preprocessor conditionals to select certain options, and the package the sources come in may have scripts to make various preparations for compiling.
Two, you want the sources to match the C++ standard’s specification for declaring sin and cos. If the fdlibm package you have supports C++, this might not require any work on your part. Otherwise, you may have to modify the sources to wrap the sin and cos definitions inside the std namespace, or otherwise modify the sources.
The second task is linking. Using a library is not required. You can simply compile the source file(s) containing sin and cos to produce an object module (or modules), then link that object module (or modules) with your other object modules. If you wish, you can instead create a library, put the object module(s) containing sin and cos into it, and link the library with your object modules. With most common linkers, you can link a library with your object modules simply by listing it as input to the linker, the same way object modules are listed. (Some linkers also have other options for referring to libraries, but simply giving the library's normal file path is usually sufficient.) You can create and link either a static or a dynamic library, as you prefer. If you use a dynamic library, it must be present when the executable runs. For a simple application for your own use, there is no need for a dynamic library (or even a static one; object modules are fine). Essentially, the purpose of libraries is to make distributing object modules to other people easier, or to organize large projects; simple applications do not need them.
Another note about linking: When you supply your own sin and cos, the linker has two implementations to choose from: Your implementations of sin and cos and the standard library implementations of sin and cos. Usually, standard libraries are linked in after any user-specified files, so merely specifying your object module or library will suffice to ensure your sin and cos are used, not the library’s sin and cos. In the event this is not the case, there should be linker options to change the order in which libraries are considered.
Related
I'm developing a large software package consisting of many packages which are compiled to shared objects. For performance reasons, I want to compile Eigen 3 (a header-only library) with vector instructions, but the templated methods are being compiled all over the place. How can I ensure that the Eigen functions are compiled into a specific object file?
This software consists of ~2000 individual packages. To keep development going at a reasonable pace, the recommended way of compiling the program is to sparsely check out some of the packages and compile them, after which the program can be executed using precompiled (by some CI system) shared libraries.
The problem is that part of my responsibility is to optimise the CPU time of the program. In order to do so, I wanted to compile the package I am working on (let's call it A.so) with the -march flag so Eigen can exploit modern SIMD processor extensions.
Unfortunately, because Eigen is a header-only library, the Eigen functions are compiled into many different shared objects. For example, one of the most CPU-intensive methods called in A.so is the matrix multiplication kernel, which is compiled into B.so. Many other Eigen functions are compiled into C.so, D.so, etc. Since these objects are compiled for older, more widely implemented instruction set extensions, they are not compiled with AVX, AVX2, etc.
Of course, one possible solution is to include packages B, C, D, etc. in my own sparse compilation, but this negates the advantage of compiling only a part of the project. In addition, it would leave me including ever more packages if I really want to vectorise all the linear algebra operations in package A's code.
What I am looking for is a way to compile all the Eigen functions that package A uses into A.so, as if the Eigen functions were defined with the static keyword. Is this possible? Is there some mechanism in the compiler/linker that I can leverage to make this happen?
One obvious solution is to hide these symbols. This happens (if I understand the problem properly) because these functions are exported and can be used by other subsequently loaded libraries.
When you build your library and link it against the other libraries, the linker reuses what it can, including symbols from the old packages. I hope you don't require these libraries for your own build?
So two options:
Force the loading of A before the other libraries (but if you need the other libraries, I don't think this is doable),
Tell the linker that these functions should not be visible to other libraries (visibility=hidden by default).
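Option 2 can be applied globally (compiling the package with -fvisibility=hidden and re-exporting only the public API) or per symbol. A sketch using GCC/Clang attributes, with made-up function names; this assumes a GCC-compatible toolchain:

```cpp
// Hidden symbols are resolved inside this shared object and are not
// available for other libraries to bind to; "default" restores export.
__attribute__((visibility("hidden")))
int internal_helper(int x) { return x * 2; }     // stays private to this .so

__attribute__((visibility("default")))
int public_api(int x) { return internal_helper(x) + 1; }  // exported
```

If the Eigen instantiations in A.so are hidden this way, the dynamic loader cannot substitute the non-AVX copies from B.so, C.so, etc. when resolving A.so's calls.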
I saw something similar happen with a badly compiled 3rd-party library. It was built in debug mode and shipped in the product, and all of a sudden one of our libraries experienced a slowdown. The map files identified where the culprit debug function came from: the library exported all of its symbols by default.
An alternative way to change visibility without modifying the code is to filter symbols during the linking stage using version scripts (see https://sourceware.org/binutils/docs/ld/VERSION.html). You'll need something like:
{
  global: *;
  local:
    extern "C++"
    {
      Eigen::*;
      *Eigen::internal::*;
    };
};
I'm new to C++ programming and I'm a little confused about how the compiler includes standard libraries in a C++ program. Say, for example, I want to use the sqrt() function. I know that I have to include the math.h header file in my source code, but the math library contains many functions other than sqrt(). So my question is: is the source code for all of these functions added to the program, which would be unnecessary, or just the function that I need?
I hope my question was clear and thanks in advance.
Functions that are NOT templates (and not so trivial that they are just one or two lines) are compiled separately and then stored in a "library" (which is not the header file; the header just contains a declaration such as double sqrt(double);).
The compiler will (given the right compile-time flags) link to the C library that contains those functions. The linker [called upon by the compiler] will then introduce the code that was compiled when the library was built. So, typically, the source is not compiled when you build your program - it was done some other time.
The linker understands what functions are needed by the code you are building, so will only add those functions to your program, not ALL of the functions [but it may pull in some other functions than the precise one that you asked for, for example there may be some helper functions and perhaps some generic error handling functions that are needed by sqrt].
No. Linking means that the linker figures out which symbols (functions and data objects) from your library are necessary to build your program, and then includes only those that are.
In fact, with dynamic linking, it wouldn't even include the function itself, but just the reference to the function and how to load the library containing it.
Generally, libraries that are linked with your executables aren't source code, but binary objects, which already have been translated to machine language ("compiled").
You have a confusion between libraries and header files. Libraries are the implementations. Header files contain the declarations.
You #include the header file so that the compiler knows the declarations (the names and types) of the functions you use.
All the declarations (unless blocked by preprocessor directives) are parsed by the compiler and stored in a dictionary. The only cost of a declaration you don't use is that it takes up room in the compiler's dictionary. Usually this is not an issue (modern compilers have large-capacity dictionaries).
As far as adding functions to your program, that is handled during the linking phase (usually by a linker application) and is compiler dependent. Fundamentally, only functions that are used by your program are pulled from the library (static libraries only) and placed into your executable. Some compilers may speed up the build process by including groups of functions that are popular but that you may not use. This speeds up the build process but makes your executables larger.
Some library functions may use other library functions, which means a single library function can add a lot more code to your executable. One example is printf: it requires much more support than puts because of all the formatting specifiers, so printf may pull in other (internal) library functions.
I have been hearing about build dependencies and runtime dependencies. They are fairly self-explanatory terms. As far as I understand, a build dependency is a component required at compile time. For example, if A has a build dependency on B, A cannot be built without B. A runtime dependency, on the other hand, is dynamic. If A has a runtime dependency on B, A can be built without B but cannot run without B.
This understanding, however, is too shallow. I'd like to read about and understand these concepts better. I have been googling but could not find a good source. Can you please provide a link or the right keywords to search for?
I'll try to keep it simple and theoretical only.
When you write code that calls a function "func", the compiler needs the function's descriptor (e.g. "int func(char c);", usually available in .h files) to verify the correctness of the arguments, and the linker needs the function's implementation (where your actual code resides).
Operating systems provide a mechanism to separate function implementations into different compiled modules. This is usually needed for:
Better code reuse (multiple applications can use the same code, with different data context)
More efficient compilation (you don't need to recompile all dependency libraries)
Partial upgrades
Distribution of compiled libraries, without disclosing the source code
To support this functionality, the compiler is provided with function descriptors (.h files) as usual, while the linker is provided with .lib files containing function stubs. The operating system is responsible for loading the actual implementation file while the application is loading (if it is not already loaded for a different application) and for mapping the actual functions into the new application's memory.
Dynamic-load functionality is extended to object-oriented languages as well (C++, C#, Java, etc.).
Practical implementations are OS dependent: dynamic linking is implemented as DLL files on Windows or as SO files on Linux.
Special OS-dependent techniques can be used to share context (variables, objects) between different applications that use the same dynamic library.
Meir Tseitlin
I have a Fortran library that uses a lot of modules. I use the ifort compiler on Windows. Therefore, I get a *.lib file for the library and *.mod files for the used modules.
This has the disadvantage that I also have to distribute the *.mod files, if I want to use the compiled library in another programme. How can this be prevented? I see two possibilities:
Create an interface that defines functions used to call the functions or procedures inside the library modules. Then I only have to provide the file where the interface is defined.
Use the c-interface and export names for all module functions and procedures that should be used from outside the library using bind(c) in function definitions. Then I can distribute the library with a c-like header file.
Are there any other possibilities? What are best practices for distributing a compiled Fortran library that uses modules?
I think that distributing the .mod files as well is by far the easiest option if the library is supposed to be called from Fortran. If it is to be called from other languages, you need the C interface anyway.
The bad thing is losing the Fortran explicit interfaces. With option number 1 you can probably still have them if you supply an include file with interface blocks, but just supplying the .mod file is better IMHO.
I've developed a module written in C++ that manages the licenses for my company's product. To prevent DLL replacement, it is our goal to statically link the DLL in the solution. This is very easy to do in C++ but proving to be a bit problematic for part of our codebase that is written in Fortran.
I realize that this could possibly vary from compiler to compiler (We use Intel Fortran 9.1), but is there any universal way to implement static linking of a C++ DLL within Fortran?
To get static linking, the usual way is not to use a DLL but simple libraries instead (*.lib). This has nothing to do with programming languages: it just depends on the operating system.
Building a library is also simpler than building a DLL. On Unix, a library has the suffix .a whereas a DLL has a suffix .so (for shared object).
Nevertheless, it is often possible to link a DLL statically, but this is achieved with a specific option passed to the linker. For instance, on Unix, with many compiler suites, the option is either -static or -Bstatic. Look for the keyword "static" in your compiler's programming manual.
If you have access to the source, just compile it to object files and link them into your Fortran project. ISO_C_BINDING should work on many compilers.
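The C++ side of such a mixed build might look like the fragment below (check_license is a hypothetical name; the Fortran side would declare it in an interface block with ISO_C_BINDING and bind(c)):

```cpp
// extern "C" suppresses C++ name mangling, so a Fortran interface using
// bind(c, name="check_license") can bind to the plain symbol name.
extern "C" int check_license(int product_id) {
    return product_id > 0 ? 1 : 0;  // toy body; the real check goes here
}
```

Compiled to an object file, this links directly into the Fortran project with no DLL involved, which also rules out DLL replacement.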