Units of measure in Fortran

Is there a library defining a datatype and services to deal with quantities associated with a unit of measure in Fortran?

There is the PHYSUNITS F90 module, which might do what you want.

Do you mean something like UDUNITS? I find it really useful for time calculations, but most other conversions are just simple multiplication/addition combinations. These are usually too easy to code manually to warrant the extra library dependency of UDUNITS. Note that the more recent version, UDUNITS-2, does not yet have a Fortran interface.
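To illustrate the derived-type approach such a library typically takes (a generic sketch only, not the actual PHYSUNITS interface): a quantity carries its dimension exponents alongside its value, and overloaded operators combine or check them.

module quantities
   implicit none
   ! A value tagged with exponents for (length, mass, time).
   type :: quantity
      real :: val
      integer :: dims(3)         ! e.g. a velocity has dims = (1, 0, -1)
   end type
   interface operator(*)
      module procedure q_mul
   end interface
contains
   function q_mul(a, b) result(c)
      type(quantity), intent(in) :: a, b
      type(quantity) :: c
      c%val  = a%val * b%val
      c%dims = a%dims + b%dims   ! exponents add under multiplication
   end function
end module

An overloaded addition operator would first check all(a%dims == b%dims) and report an error otherwise, which is how such a module catches unit mistakes at run time.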

Related

How to switch from mpif.h to mpi_f08 in fortran while maintaining compatibility?

I am working on a numerical solver written in Fortran which uses MPI for parallelization on large clusters (up to about 500 processes). Currently we include MPI via
#include "mpif.h"
which, from my understanding, is deprecated and strongly discouraged. In an effort to modernize and clean up our MPI communications, we would like to switch to the more modern mpi_f08 module. The issue we are facing is that we need to keep the possibility of compiling a version based on the old MPI header, in order not to break the coupling with another solver. I'd much appreciate some advice on how to maintain this compatibility elegantly.
Question #1: What would be an elegant way to either include the header or use the module depending on a preprocessor flag without having #ifdef statements scattered throughout the code?
My thought so far would be to define a module
module mpi_module
#ifdef MPI_LEGACY
! Legacy build: pull the header's declarations into this module.
#include "mpif.h"
#else
! Modern build: re-export everything from mpi_f08.
use mpi_f08
#endif
end module
and use this module everywhere the MPI header file is currently included. Is this a viable approach, or would it have unwanted effects that I'm currently overlooking?
Question #2: What would be an elegant way to switch between integers and the new derived types from mpi_f08 depending on the preprocessor flag? (Again, without scattering #ifdef statements throughout the code)
My initial thought on this would be to use something like
#ifdef MPI_LEGACY
#define _mpiOp_ integer
#else
#define _mpiOp_ type(MPI_Op)
#endif
so that I can simply replace
integer :: OP
by
_mpiOp_ :: OP
to obtain compatibility with both ways of including MPI. I'm not quite happy with this solution yet since, to my understanding, you cannot put these kinds of preprocessor definitions into a module. You'd therefore end up with a module plus a header file which you have to remember to include together every time. Again, I'm grateful for any potential flaws you can point out with this approach, and for any alternatives.
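To make this concrete, here is a minimal sketch of the header-plus-module pairing I have in mind (file names are placeholders, and the sources must go through the preprocessor, e.g. via the .F90 suffix):

! mpi_compat.h -- shared preprocessor definitions
#ifdef MPI_LEGACY
#define _mpiOp_ integer
#else
#define _mpiOp_ type(MPI_Op)
#endif

! some_solver_file.F90
#include "mpi_compat.h"
subroutine apply_reduction(op)
   use mpi_module       ! the wrapper module from Question #1
   implicit none
   _mpiOp_ :: op
   ! ... pass op to MPI_Reduce etc. as before ...
end subroutine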
Sorry for the long post, but I wanted to make my thoughts as clear as possible. I'm looking forward to your input!
The old and the new way are far too different. Not only do you have a use statement instead of an include statement and a derived type instead of an integer for an Op; many routines also have different signatures and use different types.
So I am afraid the answer is that there is no elegant way. You are making a conglomerate of two things that are way too different to be elegantly combined.
As has been mentioned in the comments, the first step towards more modern code is to do use mpi instead of include "mpif.h". This already enables the compiler to catch many kinds of bugs when the routines are called incorrectly. The extent to which these checks are possible depends on the details of the MPI library configuration, namely the extent to which generic interfaces are generated instead of just external statements.
If you have to combine your code with another code that uses the old way, it makes good sense to first do use mpi, see how it goes, and think whether it makes sense to go further.
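As a minimal sketch of that intermediate step (routine and variable names are invented for illustration), the change is often just the first line of each routine:

subroutine broadcast_settings(n)
   use mpi               ! was: include "mpif.h"
   implicit none
   integer, intent(inout) :: n
   integer :: ierr
   ! Same integer handles as with mpif.h, but the compiler can now
   ! check the argument list against a generic interface.
   call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
end subroutine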

SML for more-or-less large systems: compiler and interpreter interoperability

This is about programming in the large with SML. First, an overview of what seems to be available for that purpose, then a short summary, and finally the question.
The use pseudo-clause
Top-level type, exception, and value identifiers (standardml.org)
Note that the use function is special. Although not defined precisely, its intended purpose is to take the pathname of a file and treat the contents of the file as SML source code typed in by the user. It can be used as a simple build mechanism, especially for interactive sessions. Most implementations will provide a more sophisticated build mechanism for larger collections of source files. Implementations are not required to supply a use function.
Then later
val use : string -> unit (* implementation dependent *)
Its drawbacks: it is not supported by MLton, and, while not standardized, it seems to behave the same way in all major SML systems, namely reloading a unit every time a use for it is encountered. That is not acceptable because of SML's generative semantics: defining a structure multiple times produces as many distinct definitions, which is especially wrong for type definitions.
ML Basis Files
There are so-called "ML Basis Files": MLBasis (mlton.org) and ML-Kit ML Basis Files (sourceforge.net).
The load pseudo-clause
Moscow ML has load, which acts like a use that loads each unit only once, i.e. it does not reload a unit that is already loaded, which is what is needed to compose a system.
Summary
load is nice, but only recognized by Moscow ML
ML Basis Files may be nice, but they are recognized by neither Poly/ML nor Moscow ML
MLton does not recognize use
Putting everything in a single big bundle file is the only interoperable approach that works with all compilers and interpreters; it works, but it quickly becomes a burden.
The question
Is there a known interoperable way to compose a system made of multiple SML source files?
One system you did not mention is SML/NJ's Compilation Manager (CM), which is quite powerful. And there are a few other, lesser-known systems.
But that notwithstanding, the situation is indeed dire. There simply is no standardised separate compilation mechanism for SML. In practice that means that writing portable Makefiles or something similar is rather painful.
For HaMLet I went through that pain in order to make it compile with 7 different SML implementations. The approach is to use a restricted (dependency-ordered) CM file and the necessary amount of make + sed hackery to generate meta files for the other systems from it. It can also generate a file containing the respective 'use' invocations for all the sources, for the systems that at least support that. All in all it's not pretty, but it works sufficiently well.

How to get subroutine calling hierarchy in Fortran?

In a subroutine, I would like to know which parent subroutine is calling it when an error occurs. Is there any way to do this without using arguments? Then users of the subroutine could be told which caller triggered the error.
There is nothing built into Fortran which will give you the sort of information you seek. You could, as you suggest, write your own code to report the information, but it strikes me that doing so might burden your code with a lot of error-reporting infrastructure which obscures its meaning and materially affects its performance.
I suggest that you investigate your compiler's capabilities. Intel Fortran, for example, offers a traceback option which is often useful for diagnosing the causes of problems; start with the compiler documentation for that option. All the other Fortran compilers I've worked with offer similar facilities; check their documentation.
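If you do decide to roll your own, the usual hand-written pattern is a small module maintaining a stack of routine names, pushed on entry and popped before return (all names here are invented; this is only a sketch of the infrastructure the paragraph above warns about):

module call_stack
   implicit none
   integer, parameter :: max_depth = 64
   character(len=64) :: names(max_depth)
   integer :: depth = 0
contains
   subroutine push_caller(name)
      character(len=*), intent(in) :: name
      depth = depth + 1
      names(depth) = name
   end subroutine
   subroutine pop_caller()
      depth = depth - 1
   end subroutine
   subroutine report_stack()
      integer :: i
      do i = depth, 1, -1
         print *, '  called from: ', trim(names(i))
      end do
   end subroutine
end module

Every instrumented routine then calls push_caller('its_name') on entry and pop_caller() before each return, and an error handler calls report_stack(). Forgetting a single pop corrupts the report, which is exactly why compiler traceback options are usually the better tool.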

Call C/C++ code from Fortran 77 code

I'm trying to make a Fortran 77 wrapper for C++ code. I have not found information about it.
The idea is to use functions from a library that is written in C++ in a Fortran 77 program.
Does anyone know how to do it?
Thanks!
Lawrence Livermore National Laboratory developed a tool called Babel for integrating software written in multiple languages into a single, cohesive application. If your needs are simple you can probably just put C wrapper on your C++ code and call that from Fortran. However, if your needs are more advanced, it might be worth giving Babel a look.
Calling Fortran from C is easy, C from Fortran potentially tricky, C++ from Fortran may potentially become ... challenging.
I have some notes elsewhere. Those are quite old, but nothing changes very rapidly in this sort of area, so there may still be some useful pointers there.
Unfortunately, there's no really standard way of doing this, and different compilers may do it slightly different ways. Having said that, it's only when passing strings that you're likely to run into major headaches. The resource above points to a library called CNF which aims to help here, mostly by providing C macros to sugar the bookkeeping.
The short version, however is this:
Floats and integers are generally easy -- an integer is an integer, more or less.
Strings are hard (because Fortran implementations quite often store these as structures, and very rarely as C-style null-terminated arrays).
C is call-by-value, Fortran call-by-reference, which means that Fortran functions are always pointer-to-value, from C's point of view.
You have to care about how your compiler generates symbols: compilers often turn C/Fortran symbol foo into _foo or foo_ or some other variant (see the compiler docs).
C tends not to have much of a runtime, C++ and Fortran do, and so you have to remember to link that in somehow, at link time.
That's the majority of what you need to know. The rest is annoying detail, and making friends with your compiler and linker docs. You'll end up knowing more about linkers than you probably wanted to.
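To make the symbol-naming and call-by-reference points concrete, here is a minimal Fortran 77 sketch. The wrapper name CWRAP and its assumed C signature (extern "C" void cwrap_(int *n, float *x), the trailing underscore being one common but compiler-dependent convention) are purely illustrative:

*     The C++ library is hidden behind a plain C wrapper declared as
*         extern "C" void cwrap_(int *n, float *x);
*     Fortran passes arguments by reference, so the wrapper sees
*     pointers, and the Fortran source never spells out the mangling.
      PROGRAM DEMO
      INTEGER N
      REAL X
      N = 10
      X = 2.5
      CALL CWRAP(N, X)
      WRITE (*,*) 'X AFTER CALL: ', X
      END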

Tool for analyzing C++ sources (MSVC)

I need a tool which analyzes C++ sources and says what code isn't used. The size of the sources is ~500 MB.
PC-Lint is good. If it needs to be free/open source, your choices dwindle. Cppcheck is free and will check for unused private functions, but I don't think it looks for things like uninstantiated classes the way PC-Lint does.
Once again, I'll throw AQTime into the discussion. It has static code analysis for most, if not all, of the supported languages. I didn't really go into that part, though; I mainly used the dynamic profilers (memory, performance and so on).
You could use a code coverage tool (dynamic analysis) to get an idea of what code isn't being executed, and then hand-analyze to see if that code is really useless.
If you want a static analysis, you need a tool that can read the entire 500 MB of source code (est. 20 million lines? Wow!) and compute a conservative estimate of what is used. This requires doing a points-to analysis over the entire system.
Here's why: if you leave out any module Z and decide that FOO is unused, you might find out later that Z happened to be the one that used FOO, or, more subtly, that Z copied a pointer value that happened to have &FOO in it to a third module M that in turn called the "unused" function through the pointer.
What this means is that no static analysis tool that reads just single modules (compilation units) can answer this question safely. And at your scale, you can't afford to make dumb mistakes.
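To make the escape scenario concrete, here is a sketch (in Fortran, the main language of this thread; the hazard is identical with C++ function pointers, and all names are invented):

module foo_mod
contains
   subroutine foo()                 ! looks unused in this file
      print *, 'FOO was reachable after all'
   end subroutine
end module

module z_mod                        ! the module the analysis left out
   use foo_mod, only: foo
   implicit none
   procedure(foo), pointer :: hook => null()
contains
   subroutine init_hook()
      hook => foo                   ! the address of foo escapes here
   end subroutine
end module

module m_mod                        ! calls the "unused" routine
contains
   subroutine run()
      use z_mod, only: hook
      if (associated(hook)) call hook()
   end subroutine
end module

Only a whole-system points-to analysis can see that run() may reach foo().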
My company, Semantic Designs, has done points-to analysis for 35-million-line systems of C code using our DMS Software Reengineering Toolkit. DMS can read very large systems of source code. It required a custom tool, not so much because the source code was in an odd (archaic) dialect of C++ (systems in extremely modern dialects can't be this big; there hasn't been enough time to code them!), but rather because in very large systems there are other peculiar factors at play. For the C system we did, there was a custom dynamic linker, and that affected the points-to analysis, which in turn had to be customized.
Because systems of the scale you are discussing always have surprises like this (BIBSEH: "Because In Big Systems, Everything Happens"), you will likely need a custom tool to answer the question. DMS is designed to be customized.
See http://www.semanticdesigns.com/Products/DMS/DMSToolkit.html
and http://www.semanticdesigns.com/Products/FrontEnds/CppFrontEnd.html
A code coverage tool is what you need, but you will have to run your program through all of its functionality and see what is reported as unused. Since the code could include DLL-exported functions, you will have to make sure nothing uses them externally. Some code coverage tools: Purify and CTC++; BoundsChecker may have code coverage functionality too, if I remember right, and there are a bunch of other tools.
Be very careful about removing any function that may have been exported without knowing what external program may be linking/using it.