Distribute compiled Fortran library with module files

I have a Fortran library that uses a lot of modules. I use the ifort compiler on Windows, so compiling produces a *.lib file for the library and *.mod files for its modules.
This has the disadvantage that I also have to distribute the *.mod files if I want to use the compiled library in another program. How can this be avoided? I see two possibilities:
Create an interface that defines wrapper functions for calling the functions and procedures inside the library modules. Then I only have to provide the file where the interface is defined.
Use the C interface: export all module functions and procedures that should be usable from outside the library by adding bind(c) to their definitions. Then I can distribute the library with a C-like header file.
Are there any other possibilities? What are best practices for distributing a compiled Fortran library that uses modules?

I think that distributing the .mod files as well is by far the easiest option, if the library is supposed to be called from Fortran. If it is to be called from other languages, you need the C interface anyway.
The bad thing about hiding the modules is losing Fortran's explicit interfaces. With option 1 you can probably still have them if you supply an include file with interface blocks, but just supplying the .mod files is better IMHO.
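As a sketch of what option 2 might look like: a module procedure exported with bind(c), plus the C-style header you would ship instead of the .mod file. All names here (mylib_mod, mylib_add) are invented for illustration, and the build step needs a Fortran compiler (ifort or gfortran), so only the sources are generated below:

```shell
# Hypothetical library routine exported with C linkage:
cat > mylib.f90 <<'EOF'
module mylib_mod
  use iso_c_binding
  implicit none
contains
  function mylib_add(a, b) result(c) bind(c, name="mylib_add")
    real(c_double), value :: a, b
    real(c_double) :: c
    c = a + b
  end function
end module
EOF

# Matching C-style header to distribute instead of the .mod file:
cat > mylib.h <<'EOF'
#ifdef __cplusplus
extern "C" {
#endif
double mylib_add(double a, double b);
#ifdef __cplusplus
}
#endif
EOF

# Build step (requires a Fortran compiler, e.g.):
#   ifort /c mylib.f90      on Windows, or: gfortran -c mylib.f90
```

The price, as noted above, is that C callers get no Fortran explicit interface checking; the header is the only contract.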

Related

What are all these *.cm[a-z] files and when do we need them?

OCaml has various extensions for compiled files: *.a, *.cma, *.cmi, *.cmx, *.cmxa, *.cmxs (and perhaps this is not an exhaustive list). What are they, and in which cases do I need them?
If I have a library, which files do I need to ship with it? I noticed some people blindly install all *.cm[a-z] files into the archive, but is that really required?
First I suggest you read the overview sections of the byte-code and native-code compilers, as this will greatly improve your understanding of what these files are.
Now, more specifically, if your library is a set of modules characterized by a set of .mli/.ml files:
A cmi file holds the compiled interface of a module (the result of compiling an .mli file). For each module of your library that you want other people to be able to use, you need to install it (i.e., the cmi files define your public interface). It's also good practice to install the .mli files so that people can have a peek at them. These days you should also install the cmti files (generated using the -bin-annot option), which are annotated compiled interfaces that can be used by tools like ocp-index, odoc and odig.
cma files hold an archive of the results of byte-code compilation (cmo files) of your library. You should install them if you want people to be able to compile against your library to byte code.
cmxa and .a files hold an archive of the results of native-code compilation (cmx/o files) of your library. They are the counterpart of cma files for native code. You need to install them if you want people to be able to compile against your library to native code.
cmxs files are the counterpart of cmxa for native dynlinking. You need to install them if you want users of your library to be able to dynamically load it into their programs as a plugin using the Dynlink module.
cmx files are already in the cmxa; however, there is one reason why you may want to install them separately as well. If they can be seen at separate-compilation time along with the cmi files, they allow the compiler to perform cross-module inlining. Files compiled against them do, however, become dependent on the implementation, which means they will need a recompile if a cmx (i.e., the implementation) changes, even if the cmi (i.e., the interface) did not.
Note that in general it's good if you are able to compile and install all of these files (though sometimes you may want not to install the cmx files, so that people can compile separately against a cmi and switch implementations without a recompile; see the -opaque compilation flag if you need this).
One final thing to note is that OCaml has no proper namespacing: every toplevel module lives in a global namespace. This means you need to be very careful about the toplevel module names you put in a library, even for modules whose cmi you don't export. Especially avoid generic names that could be used by other libraries; use a short prefix for your library, e.g. MyLib_file rather than File (and again, even if File turns out to be an internal module that you have in the cma but whose cmi you don't export, it could clash with private or public File modules defined in other libraries).
https://realworldocaml.org/v1/en/html/the-compiler-backend-byte-code-and-native-code.html is a good resource for a summary of how these files are produced and used.
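To make the advice above concrete, a hypothetical install step for a library Mylib might copy these files into place. The file names and destination path are invented for illustration (real projects usually let ocamlfind or dune handle installation), and the build outputs are simulated with touch:

```shell
# Stand-ins for the build outputs described above:
touch mylib.mli mylib.cmi mylib.cmti mylib.cma mylib.cmxa mylib.a mylib.cmxs mylib.cmx
mkdir -p destdir/lib/mylib
cp mylib.mli mylib.cmi mylib.cmti destdir/lib/mylib/   # public interface, for humans and tools
cp mylib.cma destdir/lib/mylib/                        # byte-code archive
cp mylib.cmxa mylib.a destdir/lib/mylib/               # native-code archive
cp mylib.cmxs destdir/lib/mylib/                       # native dynlink plugin
cp mylib.cmx destdir/lib/mylib/                        # optional: enables cross-module inlining
```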

How can I use a library's header files to generate a libfoo.sym file for use with libtool -export-symbols?

I am building a shared library for the Debian GNU/Linux distribution and I am worried about the number of symbols from internal functions that it exports without any need. Since the library is built using autoconf/automake/libtool, the answer is easy: I can just add -export-symbols libfoo.sym to libfoo_la_LDFLAGS and specify only the symbols I want exported in that file.
But since this involves error-prone manual work, I figured that there has to be a better way. Is it possible to automate reading the (in this case) dozens of .h files that accompany the library and generate a first version of the libfoo.sym file?
Could I just use the C (or C++) compiler to do the busy work for me?
This is equivalent to extracting function prototypes and covered here:
Extracting C / C++ function prototypes
But since this involves error-prone manual work, I figured that there has to be a better way. Is it possible to automate reading the (in this case) dozens of .h files that accompany the library and generate a first version of the libfoo.sym file?
It might be more useful to run nm on the object files instead of trying to parse the header files. nm can be told to restrict its output to just the exported symbols (e.g., nm --extern-only --defined-only).
Could I just use the C (or C++) compiler to do the busy work for me?
Certain compilers have tools to assist with this, like gcc's visibility support.
But the real problem is that you must know which functions must be exported and which must not.
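A minimal sketch of combining both suggestions (assumes gcc and nm on Linux; foo_public and foo_internal are invented names): compile with hidden default visibility, mark only the intended API as visible, then let nm list what the shared object actually exports, which could seed a first libfoo.sym:

```shell
cat > foo.c <<'EOF'
/* Only symbols explicitly marked "default" survive -fvisibility=hidden. */
__attribute__((visibility("default"))) int foo_public(void) { return 1; }
int foo_internal(void) { return foo_public() + 1; }  /* hidden by default */
EOF
gcc -shared -fPIC -fvisibility=hidden -o libfoo.so foo.c
# Exported dynamic symbols only; redirect to a file to seed libfoo.sym:
nm -D --defined-only libfoo.so
```

The decision of what belongs in the public API still has to be made by a human; the tooling only enforces and reports it.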

How do I create a library?

Let's say I have 10 *.hpp and *.cpp files that I need to compile a program. I know that I will need those same files for many different programs. Can I create a "package" with those files that would allow me to simply write:
#include <mypackage>
instead of:
#include "file1.hpp"
#include "file2.hpp"
...
#include "file10.hpp"
I wouldn't then need to write a makefile every time I need this "package".
To be more precise, I use Linux.
A collection of C++ sources (.h files and .cpp files) can be compiled together into a "library," which can then be used in other programs and libraries. The specifics of how to do this are platform- and toolchain-specific, so I leave it to you to discover the details. However, I'll provide a couple of links that you can have a read of:
Creating a shared and static library with the gnu compiler [gcc]
Walkthrough: Creating and Using a Dynamic Link Library (C++)
Libraries can be separated into two types: source code libraries and binary libraries. There can also be hybrids of the two -- a library can be both a source and a binary library. Source code libraries are simply that: a collection of code distributed as source only, typically header files. Most of the Boost libraries are of this type. Binary libraries are compiled into a package that is loadable at runtime by a client program.
Even in the case of binary libraries (and obviously in the case of source libraries), a header file (or multiple header files) must be provided to the user of the library. This tells the compiler of the client program what functions etc to look for in the library. What is often done by library writers is a single, master header file is composed with declarations of everything that is exported by the library, and the client will #include that header. Later, in the case of binary libraries, the client program will "link" to the library, and this resolves all the names mentioned in the header to executable addresses.
When composing the client-side header file, keep complexity in mind. There may be many cases where some of your clients only want to use a few parts of your library. If you compose one master header file that includes everything from your library, your clients' compilation times will be needlessly increased.
A common way of dealing with this problem is to provide individual header files for correlated parts of your library. If you think of all of Boost as a single library, then Boost is an example of this. Boost is an enormous library, but if all you want is the regex functionality, you can #include only the regex-related header(s) to get it. You don't have to include all of Boost if all you want is the regex stuff.
Under both Windows and Linux, binary libraries can be further subdivided into two types: dynamic and static. In the case of static libraries, the code of the library is actually "imported" (for lack of a better term) into the executable of the client program. A static library is distributed by you, but it is only needed by the client during the compilation and linking step. This is handy when you do not want to force your clients to distribute additional files with their program; it also helps to avoid dependency hell. A dynamic library, on the other hand, is not "imported" into the client program directly, but is dynamically loaded by the client program when it executes. This both reduces the size of the client program and potentially the disk footprint in cases where multiple programs use the same dynamic library, but the library binary must be distributed and installed along with the client program.
On Linux:
g++ FLAGS -shared -Wl,-soname,libLIBNAME.so.1 -o libLIBNAME.so.VERSION OBJECT_FILES
where
FLAGS: typical flags (e.g., -g, -Wall, -Wextra, etc.)
LIBNAME: name of your library
OBJECT_FILES: object files resulting from compiling the cpp files (compiled with -fPIC)
VERSION: version of your library
Assuming your "file1.hpp" and "file2.hpp" etc. are closely related and (nearly) always used together, then making one "mypackage.h" that contains the includes of the other components is a good idea (it doesn't in and of itself make a library, though - that is a different process altogether).
If they are NOT closely related and/or used together, then you shouldn't have such a "mega include", because it just drags in a bunch of things that aren't needed.
Making a library involves building your code once and generating either a static library (.lib or .a file) or a shared library (.dll or .so file). The exact steps depend on what system you are using, and it's a little too complicated to explain here.
Edit: To explain further: the entire C++ standard library is actually one library file or shared library file [along with a number of header files that contain some of the code and the declarations needed to use the code in the library]. But you include <iostream> and <vector> separately; it would be pretty awful to include EVERYTHING from all the different C++ library headers in one <allcpplibrary>, even if it meant a lot less typing. The library is split into sections that do one thing per header file, so you get a "complete" set from one header file, but not too much other stuff you don't actually need.
Yes and no.
You can write an include-all header so that #include "myLib.h" is sufficient, because you include all those headers through that single header. However, that does not mean the single include is enough to have the content of the 10 .cpp files linked into your project automagically. You will have to compile them into a library and link that single library (instead of all the object files) into the projects that use "myLib.h". Library binaries come as static and dynamic libraries; the files are typically named .lib and .dll (Windows) or .a and .so (Linux) for static and dynamic libraries, respectively.
How to build and link such libraries depends on your build system; you might want to look those terms up on the net.
One alternative is to get rid of the .cpp files by defining all the functions in the headers. That way you won't have to link an additional library, but it comes at the cost of increased build times, because the compiler has to process all those functions every time you include the header, directly or indirectly, in one of your translation units.
If a client needs all ten headers to actually make use of your "package" (library), that's pretty bad interface design.
If a client needs only some headers, depending on which parts of your library are being used, let the client include the appropriate headers, so only a minimal set of identifiers are introduced. This helps scope, modularization, and compilation times.
If all else fails, you can make an "interface header" for external use, which is different from the ones you use internally for actually compiling your library. This would be the one that gets installed, and consists of the necessary contents from the other headers. (I still don't think you would need everything from every header in your lib.)
I would discourage Salgar's solution. You either have individual headers, or a monolithic one. Providing individual headers plus a central one that simply includes the others strikes me as pretty poor layout.
What I do not understand is how Makefiles play into this. Header dependencies should be resolved automatically by your Makefile / build system, i.e. it shouldn't matter here how your header files are laid out.
Simply put, all you'd have to do is create a .h or .hpp file that has:
#ifndef MAIN_LIB_H
#define MAIN_LIB_H
#include "file1.hpp"
#include "file2.hpp"
#include "file3.hpp"
...
#include "file10.hpp"
#endif
Name the file whatever you like (I chose main_lib.h to match the include guard), and just
#include "DIRECTORY PATH IF THERE IS ONE/main_lib.h"
in the main file. No need for anything else if you're using Visual Studio: just build, then press Ctrl+F5.

Using FDLIBM library in Visual Studio, C++

I'm porting some code from MATLAB to C++ and discovered that MATLAB's sin() and cos() functions produce slightly different results from the sin() and cos() functions in the C++ library. To eliminate these differences, I would like my C++ code to call the sin() and cos() functions from the fdlibm 5.3 library, which is what I think MATLAB uses for sin() and cos() operations.
However, I have been having some difficulty using the fdlibm library. I am using Visual Studio 2010 and downloaded the fdlibm header file and source code from http://www.validlab.com/software/, but I am not sure of the best way to use these files. Do I need to first build the files into a static or dynamic library and then link it to my code? Also, how do I specify that I want to use the sin() from fdlibm rather than from the C++ library? Do I need to modify the fdlibm source code so that the sin() and cos() functions are within a namespace?
Any guidance is greatly appreciated.
Essentially, you have two tasks to complete:
You must compile the fdlibm source to produce an object module suitable for your purpose.
You must link the object module with your other object modules.
I see two issues with the first task. One, sources from projects like fdlibm are typically written to be portable to many systems and may involve a fair amount of work to configure. Rather than being very simple C or C++ code, they may use a number of preprocessor conditionals to select certain options, and the package the sources come in may have scripts to make various preparations for compiling.
Two, you want the sources to match the C++ standard’s specification for declaring sin and cos. If the fdlibm package you have supports C++, this might not require any work on your part. Otherwise, you may have to modify the sources to wrap the sin and cos definitions inside the std namespace, or otherwise modify the sources.
The second issue is linking. Using a library is not required. You can simply compile the source file(s) containing sin and cos to produce an object module (or modules), then link that object module (or modules) with your other object modules. If you wish, you can instead create a library, put the object module(s) with sin and cos into the library, and link the library with your object modules. With most common linkers, you can link a library with your object modules simply by listing it as input the linker, the same way object modules are listed. (Some linkers also have other options for referring to libraries, but simply giving its normal file path is usually sufficient.) You can create and link either a static or a dynamic library, as you prefer. If you use a dynamic library, it must be present when the executable runs. For a simple application for your own use, there is no need to use a dynamic library (or even to use a static library; object modules are fine). (Essentially, the purpose of libraries is to make distributing object modules to other people easier, or to organize large projects. Simple applications do not need libraries.)
Another note about linking: When you supply your own sin and cos, the linker has two implementations to choose from: Your implementations of sin and cos and the standard library implementations of sin and cos. Usually, standard libraries are linked in after any user-specified files, so merely specifying your object module or library will suffice to ensure your sin and cos are used, not the library’s sin and cos. In the event this is not the case, there should be linker options to change the order in which libraries are considered.

What's the relationship between header files and library files in c++?

Why do we need to add both includes and libs to the compilation?
Why don't the libraries wrap everything up themselves?
Header files define interfaces; libraries provide implementations.
The header for a library is going to tell your compiler the names and signatures of functions provided by the library, the names of variables provided by the library, and the layout of classes provided by the library.
The library itself is compiled code which is executed at run time. Using the header during compilation allows your compiler to generate compiled code which knows how to invoke and communicate with the existing library code.
A header file (usually) only contains declarations for classes and functions. The actual implementations are built from CPP files. You can then link against those implementations with only the header declarations available.