In older Fortran code, when .or. is used with two integer operands, is the result a bit-wise OR of the operands, or 0/1?
I'm updating legacy code, and believe I should be replacing these instances of .or. with IOR, but am uncertain if that was the expected result in older code. Should I instead be setting the result to either 0 or 1?
I believe what you are seeing is indeed a custom extension. I haven't seen this one in use before, but I did find a reference on the web about such things actually existing in the wild:
When Fortran programs communicate directly with digital hardware it may be necessary to carry out bit-wise logical operations on bit-patterns. Standard Fortran does not provide any direct way of doing this, since logical variables essentially only store one bit of information and integer variables can only be used for arithmetic. Many systems provide, as an extension, intrinsic functions to perform bit-wise operations on integers. The function names vary: typically they are IAND, IOR, ISHIFT. A few systems allow the normal logical operators such as .AND. and .OR. to be used with integer arguments: this is a much more radical extension and much less satisfactory, not only because it reduces portability, but also because it reduces the ability of the compiler to detect errors in normal arithmetic expressions.
Reference
Compilers with DEC/VMS links or heritage support the extension of allowing integer arguments to .OR. (and other logical operators). That group of compilers defines the .OR. operation on integers as bit-wise.
A currently supported compiler with that heritage is Intel Fortran (via Compaq Fortran, via Digital Fortran, etc).
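For illustration, here is a minimal sketch of the standard-conforming replacement; the variable names are made up, and the commented-out line shows the non-standard extension form:

! Replacing the non-standard integer .OR. extension with the
! standard intrinsic IOR.
program bitwise_or
    implicit none
    integer :: flags, mask, result

    flags = 12          ! binary 1100
    mask  = 10          ! binary 1010

    ! Legacy extension (DEC/VMS-heritage compilers only):
    !     result = flags .or. mask
    ! Standard Fortran equivalent, bit-wise:
    result = ior(flags, mask)

    print '(A, I0)', 'ior(12, 10) = ', result   ! prints 14 (binary 1110)
end program bitwise_or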
I have a code that I run on several different clusters, which all have different combinations of MPI & LAPACK.
This can cause problems. For example I currently use ifort's "-i8" option, which works fine with LAPACK, but now all MPI calls are broken, because it expects integer(4), rather than integer(8).
Is there an elegant & flexible way to adapt the integer type based on the local MPI & LAPACK installation?
Hard-coding the types for every specific call is just very cumbersome and inflexible.
MPI calls do not expect INTEGER(4) nor INTEGER(8); they expect just INTEGER. And, as always, remember what those (4) and (8) actually mean in Fortran: integer*4 vs integer(4) vs integer(kind=4)
With -i8 you are changing what INTEGER means, to which kind it corresponds. You can do that, but you have to compile the MPI library with the same settings. The library may or may not be prepared to be compiled that way, but theoretically it should be possible.
You could also try passing integer(int32) instead of integer to MPI. If it is the correct kind, the one which corresponds to the default kind of the MPI library, the TKR checks and all other checks should pass OK. But it is not recommended.
To stay strictly within the Fortran standard, when you promote the default integer kind, you should also promote the default real and logical kind.
To stay portable use integers that correspond to the API of the library you use and make sure the library is meant to be used with that particular compiler and with that particular compiler configuration.
Usually, for portability one should not rely on promoting the default kinds but one should use specific kinds which suit the given purpose in the specific part of the code.
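As an illustration of that last point, here is a minimal sketch. The scenario is assumed purely for the example: an MPI library built against the default (32-bit) INTEGER and an ILP64 LAPACK; the names n64 and ierr32 are illustrative only.

program kinds_demo
    use iso_fortran_env, only: int32, int64
    implicit none

    integer(int64) :: n64        ! matrix dimension for an ILP64 LAPACK API
    integer(int32) :: ierr32     ! error code for a default-integer MPI API

    n64 = 1000_int64
    ierr32 = 0_int32
    print *, n64, ierr32
end program kinds_demo

This way each call site states the kind the library actually expects, independently of flags like -i8.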
Do Fortran kind parameters for the same precision change depending on the processor, even with the same compiler? I have already read the post here.
The thing I struggle with is: if we are using the same compiler, say gfortran, why would there be a different set of kind parameters for the same precision? I mean, the compiler's specification is the same, so shouldn't the compiler always give us the same precision for a particular kind parameter, no matter what operating system or processor I am using?
EDIT: I read somewhere that, for integers, different CPUs support different integral data types, which means some processors might not directly support certain precisions of integers. I also read that programming languages like Fortran opt for optimization, so the language is implemented in a way that avoids unusual precisions that are not directly supported by the hardware. Does this have anything to do with my concern?
You are asking "do they change". The answer is "they may".
The meaning of a certain kind value for a certain type is Fortran processor (the language concept - which is not the same thing as a microprocessor) dependent.
The concept of a Fortran processor covers the entire system that is responsible for processing and executing Fortran source - the hardware, operating system, compiler, libraries, perhaps even the human operator - all of it. Change any part of that system, and you can have a different Fortran processor.
Consequently there is no requirement that the interpretation of a particular kind value for a particular type be the same for the same compiler given variations in compiler options or hardware in use.
If you want your code to be portable, then don't make the code depend on particular kind values.
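One way to follow that advice is to request kinds by the properties you need, using the standard selected_real_kind and selected_int_kind intrinsics; a minimal sketch:

program portable_kinds
    implicit none
    ! At least 15 significant decimal digits (typically IEEE double):
    integer, parameter :: dp = selected_real_kind(p=15)
    ! Integers up to at least 10**9 (typically a 32-bit integer):
    integer, parameter :: i9 = selected_int_kind(9)

    real(dp)    :: x
    integer(i9) :: n

    x = 1.0_dp / 3.0_dp
    n = 123456789_i9
    print *, x, n
end program portable_kinds

The numeric values of dp and i9 may differ between Fortran processors, but the precision you get does not.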
As we know, C is a compiled language. The C language Wikipedia article says:
It was designed to be compiled using a relatively straightforward compiler, to provide low-level access to memory, to provide language constructs that map efficiently to machine instructions, and to require minimal run-time support. It also says that by design, C provides constructs that map efficiently to typical machine instructions, and therefore it has found lasting use in applications that had formerly been coded in assembly language, including operating systems, as well as various application software for computers ranging from supercomputers to embedded systems.
But then I read this, and Thinking in C++, Volume 2 by Bruce Eckel says, in Chapter 2, titled Iostreams (I've omitted some parts):
The big stumbling block is the runtime interpreter used for the variable-argument list functions. This is the code that parses through your format string at runtime and grabs and interprets arguments from the variable argument list. It’s a problem for four reasons. Because the interpretation happens at runtime there’s a performance overhead you can’t get rid of. It’s frustrating because all the information is there in the format string at compile time, but it’s not evaluated until runtime. However, if you could parse the arguments in the format string at compile time you could make hard function calls that have the potential to be much faster than a runtime interpreter (although the printf( ) family of functions is usually quite well optimized).
This link also says:
More type-safe: With <iostream>, the type of object being I/O’d is known statically by the compiler. In contrast, cstdio uses "%" fields to figure out the types dynamically.
So before reading this I was thinking that no interpreter is used in a compiled language like C, but is it really true that a runtime interpreter is also at work during the execution of a C program? Was I wrong before reading this? Is there really that much overhead from this runtime interpretation compared to iostreams?
What?
There is no runtime interpretation of your code; it is just that functions taking format strings have to parse them at run time.
Of course they have to loop over the format string to learn about the arguments and the desired formatting, which takes time.
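A minimal illustration (nothing here is specific to any particular libc):

/* printf must scan the format string at run time to discover the
 * number and types of the arguments, even though the string is
 * already known at compile time. */
#include <stdio.h>

int main(void)
{
    int n = 42;
    double x = 3.14;

    /* At run time, printf walks this string character by character,
     * finds %d and %f, and pulls arguments off the variadic list
     * accordingly.  A mismatch (e.g. %d paired with a double) is
     * undefined behaviour that the language does not require the
     * compiler to catch. */
    printf("n = %d, x = %f\n", n, x);
    return 0;
}

That per-call scan is the "runtime interpreter" Eckel is talking about; the program as a whole is still fully compiled.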
If you are programming with the C language for a microprocessor that does not have an FPU, does the compiler signal errors when floating point literals and keywords are encountered (0.75, float, double, etc)?
Also, what happens if the result of an expression is fractional?
I understand that there are software libraries that are used so you can do floating-point math, but I am specifically wondering what the results will be if you did not use one.
Thanks.
A C implementation is required to implement the types float and double, and arithmetic expressions involving them. So if the compiler knows that the target architecture doesn't have floating-point instructions, then it must bring in a software library to do the job. The compiler is allowed to link against an external library; it's also allowed to implement floating-point operations in software by itself, as intrinsics. But it must somehow generate code to get it done.
If it doesn't do so [*] then it is not a conforming C implementation, so strictly speaking you're not "programming with the C language". You're programming with whatever your compiler docs tell you is available instead.
You'd hope that code involving float or double types will either fail to compile (because the compiler knows you're in a non-conforming mode and tells you) or else fail to link (because the compiler emits calls to emulation routines in the library, but the library is missing). But you're on your own as far as C is concerned, if you use something that isn't C.
I don't know the exact details (how old do I look?), but I imagine that back in the day, if you took some code compiled for x87 you might be able to link and load it on a system using an x86 with no FPU. Then the CPU would complain about an illegal instruction when you tried to execute it -- quite possibly the system would hang, depending on what OS you were running. So the worst possible case is pretty bad.
what happens if the result of an expression is fractional?
The actual result of an expression won't matter, because the expression itself was either performed with integer operations (in which case the result is not fractional) or else with floating-point operations (in which case the problem arises before you even find out the result).
[*] or if you fail to specify the options to make it do so ;-)
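To make this concrete, here is a hedged sketch. The flag and routine name below come from gcc's ARM EABI support (-mfloat-abi=soft and __aeabi_dmul); other toolchains use different names, but the principle is the same.

#include <stdio.h>

int main(void)
{
    /* volatile keeps the compiler from folding the product at
     * compile time, so an actual multiplication must happen. */
    volatile double a = 0.75, b = 2.0;

    /* On a soft-float target this compiles not to an FPU instruction
     * but to a call into the compiler's runtime support library
     * (e.g. __aeabi_dmul on ARM EABI). */
    double c = a * b;

    printf("%f\n", c);
    return 0;
}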
Floating-point is a required part of the C language, according to the C standard. If the target hardware does not have floating-point instructions, then a C implementation must provide floating-point operations in some other way, such as by emulating them in software. (All calculations are just functions of bits. If you have elementary operations for manipulating bits and performing tests and branches, then you can compute any function that a general computer can.)
A compiler could provide a subset of C without floating-point, but then it would not be a standard-compliant C compiler.
Software floating point can take two forms:
a compiler may generate calls to built-in floating point functions directly - for example the operation 1.2 * 2.5 may invoke something like fmul( 1.2, 2.5 ),
alternatively for architectures that support an FPU, but for which some device variants may omit it, it is common to use FPU emulation. When an FP instruction is encountered an invalid instruction exception will occur and the exception handler will vector to code that emulates the instruction.
FPU emulation has the advantage that when the same code is executed on a device with a real FPU, the FPU will be used automatically and will accelerate execution. However, without an FPU there is an overhead compared with a direct software implementation, so if the application is never expected to run on an FPU, emulation might best be avoided if the compiler provides the option.
Software floating point is very much slower than hardware-supported floating point. Use of fixed-point techniques can improve performance with acceptable precision in many cases; see the sketch below.
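To illustrate the fixed-point suggestion, a hypothetical sketch using a Q16.16 representation (16 integer bits, 16 fraction bits); the type and helper names are made up for the example, and all the arithmetic uses only integer instructions:

#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;                 /* value scaled by 2^16 */
#define Q_ONE 65536                     /* 1.0 in Q16.16 */

/* Multiply two Q16.16 numbers, widening to 64 bits to avoid overflow. */
static q16_16 q_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

/* Print a non-negative Q16.16 value with four decimal places,
 * using integer operations only. */
static void q_print(q16_16 q)
{
    printf("%ld.%04ld\n", (long)(q >> 16),
           (long)(((int64_t)(q & 0xFFFF) * 10000) >> 16));
}

int main(void)
{
    q16_16 a = (q16_16)(12 * (int64_t)Q_ONE / 10);  /* 1.2 */
    q16_16 b = (q16_16)(25 * (int64_t)Q_ONE / 10);  /* 2.5 */
    q_print(q_mul(a, b));   /* prints 2.9999: ~3, minus truncation error */
    return 0;
}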
Typically, such a microprocessor comes along with either a driver package or a complete BSP (board support package, consisting of drivers and an OS linked together), both of which contain FP library routines.
The compiler replaces every floating-point operation with an equivalent function call. This should be taken into consideration, especially when invoking such operations iteratively (inside a for / while loop), since the resulting calls can inhibit optimizations such as loop unrolling.
The result of not including the required libraries within the project would be linkage errors.
I have to work on a Fortran program which used to be compiled using Microsoft Compaq Visual Fortran 6.6. I would prefer to work with gfortran, but I have run into lots of problems.
The main problem is that the generated binaries behave differently. My program takes an input file and then has to generate an output file. But sometimes, when using the binary compiled by gfortran, it crashes before finishing, or gives different numerical results.
This is a program written by researchers which uses a lot of floating-point numbers.
So my question is: what are the differences between these two compilers which could lead to this kind of problem?
edit:
My program computes the values of some parameters and there are numerous iterations. At the beginning, everything goes well. After several iterations, some NaN values appear (only when compiled by gfortran).
edit:
Thank you, everybody, for your answers.
So I used the Intel compiler, which helped me by giving some useful error messages.
The origin of my problems is that some variables were not initialized properly. It looks like when compiling with Compaq Visual Fortran these variables automatically take 0 as a value, whereas with gfortran (and Intel) they take arbitrary values, which explains the numerical differences that add up over the following iterations.
So now the solution is a better understanding of the program to correct these missing initializations.
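A contrived sketch of the kind of bug described here (the program is made up for illustration): total is read before it is ever assigned, so compilers or options that happen to zero such memory (e.g. gfortran's -finit-local-zero) mask the bug, while others leave garbage in it.

program uninit_demo
    implicit none
    real :: total
    integer :: i

    ! Bug: total is used before being assigned.
    ! The fix is an explicit initialization:  total = 0.0
    do i = 1, 10
        total = total + real(i)
    end do
    print *, total
end program uninit_demo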
There can be several reasons for such behaviour.
What I would do is:
Switch off any optimization
Switch on all debug options. If you have access to e.g. the Intel compiler, use ifort -CB -CU -debug -traceback. If you have to stick to gfortran, use valgrind; its output is somewhat less human-readable, but it's often better than nothing.
Make sure there are no implicitly typed variables; use implicit none in all the modules and all the code blocks.
Use consistent float types. I personally always use real*8 as the only float type in my codes (a standard-conforming way to do this is sketched below). If you are using external libraries, you might need to change call signatures for some routines (e.g., BLAS has different routine names for single and double precision variables).
If you are lucky, it's just that some variable doesn't get initialized properly, and you'll catch it with one of these techniques. Otherwise, as M.S.B. suggested, a deeper understanding of what the program really does is necessary. And, yes, it might be necessary to check the algorithm manually, starting from the point where you say 'some NaN values appear'.
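On the consistent-float-types point: real*8 is a common extension rather than standard Fortran, so for what it's worth, a standard-conforming way to express the same idea is a named kind parameter in a shared module; a minimal sketch:

module precision_mod
    use iso_fortran_env, only: real64
    implicit none
    integer, parameter :: wp = real64   ! working precision, used everywhere
end module precision_mod

program consistent_types
    use precision_mod, only: wp
    implicit none
    real(wp) :: a, b

    a = 1.0_wp
    b = a / 3.0_wp   ! literals carry the same kind via the _wp suffix
    print *, b
end program consistent_types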
Different compilers can emit different instructions for the same source code. If a numerical calculation is on the boundary of working, one set of instructions might work and another not. Most compilers have options to use more conservative floating-point arithmetic versus optimizing for speed -- I suggest checking which such options your compiler provides. More fundamentally, this problem -- particularly that the compilers agree for several iterations but then diverge -- may be a sign that the numerical approach of the program is borderline. A simplistic solution is to increase the precision of the calculations, e.g., from single to double. Perhaps also tweak parameters, such as a step size or similar. Better would be to gain a deeper understanding of the algorithm and possibly make a more fundamental change.
I don't know about the crash, but some differences in the results of numerical code on an Intel machine can be due to one compiler using 80-bit doubles and the other 64-bit doubles, if not for variables then perhaps for temporary values. Moreover, floating-point computation is sensitive to the order in which elementary operations are performed; different compilers may generate different sequences of operations.
It could be a lot of things: differences in type implementations, differences in various non-standard vendor extensions, and so on.
Here are just some of the language features that differ (look at gfortran and intel). Programs written to the Fortran standard work the same on every compiler, but a lot of people don't know which language features are standard and which are extensions, and so use the extensions; when the code is later compiled with a different compiler, troubles arise.
If you post the code somewhere I could take a quick look at it; otherwise, like this, 'tis hard to say for certain.