Fortran precision default with/without compiler flag

I'm having trouble with the precision of constant numerics in Fortran.
Do I need to write every 0.1 as 0.1d0 to have double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how could I check whether the flag option actually works for my code?
I am using F2py to call Fortran code from my Python code, and it doesn't report an error even if I pass a flag it doesn't recognize, which is what worries me.

In a Fortran program, 1.0 is always a default real literal constant and 1.0d0 is always a double precision literal constant.
However, "double precision" means different things in different contexts.
In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.
gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.
So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
Questions like this one come down to how precision is implied by literal constants. To anyone who asks my advice about flags like -fdefault-real-8, I would say to avoid them.
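As for checking whether a flag actually took effect: a minimal sketch (the program name is arbitrary) is to print the kind number, decimal precision and exponent range of a default real, and compare runs compiled with and without the flag:
program check_default_real
    implicit none
    ! With gfortran, the kind number of the default real is 4
    ! without -fdefault-real-8 and 8 with it; precision and
    ! range change accordingly.
    print *, 'kind(1.0)      = ', kind(1.0)
    print *, 'precision(1.0) = ', precision(1.0)
    print *, 'range(1.0)     = ', range(1.0)
end program check_default_real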

Adding to @francescalus's response above: in my opinion, since the meaning of double precision can change across platforms and compilers, it is good practice to explicitly declare the desired kind of the constant using the standard Fortran convention, as in the following example:
program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
    write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
    write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test
Compiling this code with gfortran gives:
$ gfortran -std=gnu *.f95 -o main
$ ./main
real64: .666666666666667
double precision: .666666666666667
single precision: .666666686534882
Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.
Now consider another scenario: the variable is declared to be of the 64-bit kind, but the numerical computation on the right-hand side is done in 32-bit precision:
program test
    use, intrinsic :: iso_fortran_env, only: RK => real64
    implicit none
    real(RK) :: real_64
    real_64 = 2.e0 / 3.e0
    write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
    real_64 = 2._RK / 3._RK
    write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test
which gives the following output:
$ gfortran -std=gnu *.f95 -o main
$ ./main
32-bit accuracy is returned: 0.666666686534882
64-bit accuracy is returned: 0.666666666666667
Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not conform to the 64-bit precision you desire. The reason is that the computation is first done in the precision of the literal constants (here the default 32-bit kind) and only then is the result stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.
So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.

The answer to your question is: yes, you do need to indicate that the constant is double precision. 0.1 is a common example of this problem, as its 4-byte and 8-byte representations are different. Other constants (e.g. 0.5) whose extra precision bits are all zero don't have this problem.
This rule was pinned down in Fortran 90 and has caused problems for the conversion and reuse of many legacy FORTRAN codes. Prior to F90, assigning 0.1 to a double precision variable could have used either a real or a double precision 0.1 constant, although all compilers I used provided a double precision value. This can be a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported, e.g. PI=3.141592654 was in code on a forum this week.
However, using 0.1 as a subroutine argument has always caused problems, as this would be passed as a default real constant.
So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant when it is required. It is not a user-friendly approach.
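To see the problem concretely, here is a minimal sketch (the variable names are arbitrary): assigning 0.1 to a double precision variable preserves the single precision rounding error, while 0.1d0 does not.
program constant_kinds
    implicit none
    double precision :: a, b
    a = 0.1      ! default real constant: rounded to 4 bytes, then widened
    b = 0.1d0    ! true double precision constant
    print *, 'a     = ', a
    print *, 'b     = ', b
    print *, 'a - b = ', a - b   ! non-zero: the single precision error survives
end program constant_kinds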

Related

Very small numbers in Fortran

Is it possible to achieve very small numbers in Fortran, like 1E-1200? I know how to do it in Python, but my code for my master's thesis runs too slowly. My supervisor recommends that I use Fortran, but I am not sure whether this would be a problem there.
The other answers suggesting use of REAL128 from ISO_FORTRAN_ENV describe a non-portable solution. What you get there is a real type whose representation is 128 bits, but that says nothing about the range or precision of the type! For example, some IBM systems have a 128-bit real kind that is actually two doubles with an offset exponent. This gets you more precision but not significantly more range.
The proper way to do this is to use the SELECTED_REAL_KIND intrinsic function to determine the implementation's kind that supports the desired range. For example:
integer, parameter :: bigreal = SELECTED_REAL_KIND(R=1200)
real(KIND=bigreal) :: x
If the implementation doesn't have a real kind that can represent a value with a decimal exponent of plus or minus 1200, SELECTED_REAL_KIND returns a negative value (so the declaration will fail to compile); otherwise you'll get the smallest appropriate kind.
You could also specify a P= value in the call to indicate what minimum precision (in decimal digits) you need.
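A combined request for both range and precision might look like the following sketch (the names are arbitrary):
program pick_kind
    implicit none
    ! Request at least 30 decimal digits and a decimal exponent
    ! range of at least 1200; SELECTED_REAL_KIND returns a negative
    ! value if no such kind exists, in which case the real(...)
    ! declaration below will not compile.
    integer, parameter :: bigreal = selected_real_kind(p=30, r=1200)
    real(kind=bigreal) :: x
    x = 1.0e-1200_bigreal
    print *, 'kind number = ', bigreal, '  x = ', x
end program pick_kind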
The short answer is yes.
Modern compilers typically support so-called quad precision, a 128-bit real. A portable way to access this type is through the intrinsic module ISO_FORTRAN_ENV. Here is a sample program to show how big and small these numbers get:
program main
    use ISO_FORTRAN_ENV, only : REAL32, REAL64, REAL128
    implicit none
    ! -- tiny and huge grab the smallest and largest
    ! -- representable number of each type
    write(*,*) 'Range for REAL32: ', tiny(1._REAL32), huge(1._REAL32)
    write(*,*) 'Range for REAL64: ', tiny(1._REAL64), huge(1._REAL64)
    write(*,*) 'Range for REAL128: ', tiny(1._REAL128), huge(1._REAL128)
end program main
The kinds REAL32, REAL64, and REAL128 correspond to what are typically known as single, double, and quad precision. Wider kinds have a larger range of representable numbers and greater precision.
On my machine with gfortran 4.8, I get:
mach5% gfortran types.f90 && ./a.out
Range for REAL32: 1.17549435E-38 3.40282347E+38
Range for REAL64: 2.2250738585072014E-308 1.7976931348623157E+308
Range for REAL128: 3.36210314311209350626E-4932 1.18973149535723176502E+4932
As you can see, quad precision can represent numbers as small as 3.4E-4932.
Most Fortran compilers support a REAL128 data format that matches IEEE binary128. Some also support an 80-bit extended precision kind with a similar range, matching C's long double. These don't have the performance of REAL64 but should still be much faster than Python.
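If you want to see which real kinds a particular compiler provides, the intrinsic module iso_fortran_env (Fortran 2008) also exports the array constant real_kinds; a minimal sketch:
program list_real_kinds
    use, intrinsic :: iso_fortran_env, only: real_kinds
    implicit none
    ! Prints the kind numbers of every supported real kind,
    ! e.g. 4, 8, 10, 16 with gfortran on x86 hardware.
    print *, 'supported real kind numbers: ', real_kinds
end program list_real_kinds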

Defining constants of sufficient precision in Fortran [duplicate]

I have the following Fortran code:
Program Strange
Real(Kind=8)::Pi1=3.1415926535897932384626433832795028841971693993751058209;
Real(Kind=8)::Pi2=3.1415926535897932384626433832795028841971693993751058209_8;
Print*, "Pi1=", Pi1;
Print*, "Pi2=", Pi2;
End Program Strange
I compile with gfortran, and the output is:
Pi1= 3.1415927410125732
Pi2= 3.1415926535897931
Of course the second is correct, but should this be the case? It seems like Pi1 is being input to memory as a single precision number, and then put into a double precision memory slot. But this seems like an error to me. Am I correct?
I do know a bit of Fortran! @Dougal's answer is correct, though the snippet he quotes is not: embedding the letter d into a real literal constant has not been required since Fortran 90; indeed, many Fortran programmers now regard that approach as archaic. The snippet is also misleading in advising the use of 3.1415926535d+0 to initialise a 64-bit floating-point value for pi: it doesn't set enough of the digits to their correct values.
The statement:
Real(Kind=8)::Pi1=3.1415926535897932384626433832795028841971693993751058209
defines Pi1 to be a real variable of kind 8. The literal real value 3.1415926535897932384626433832795028841971693993751058209 is, however, a real value of default kind, most likely a 4-byte real on current compilers. That seems to explain your output, but do check your documentation.
On the other hand, the literal real value 3.1415926535897932384626433832795028841971693993751058209_8 is, by the suffixed kind specification, declared to be of kind 8, which is the same as the kind of the variable it initialises.
Three more points:
1) Don't fall into the trap of thinking that kind=8 means the same thing as 64-bit floating-point number or double. For many compilers it does, for some it doesn't. Kind numbers are not portable between Fortran implementations. They are, according to the standard, arbitrary positive integers. Better, with a modern compiler, would be to use the predefined constants from the intrinsic module iso_fortran_env, e.g.
use, intrinsic :: iso_fortran_env
...
real(real64) :: pi = 3.14159265358979323846264338_real64
There are other portable approaches to setting variable kinds using functions such as selected_real_kind; see the sketch after these points.
2) Since the value of pi is unlikely to change during the execution of your program you might care to make it a parameter thus:
real(real64), parameter :: pi = 3.14159265358979323846264338_real64
3) It isn't necessary (or usual) to end Fortran statements with a ';' unless you want to have more than one statement on the same line in the source file.
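For completeness, a version of the declaration from point 1 written with selected_real_kind might look like this sketch (the name dp is arbitrary):
program pi_example
    implicit none
    ! Request at least 15 decimal digits and a decimal exponent
    ! range of 307, which on common compilers selects an IEEE
    ! 64-bit real.
    integer, parameter :: dp = selected_real_kind(15, 307)
    real(dp), parameter :: pi = 3.14159265358979323846264338_dp
    print *, pi
end program pi_example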
I don't really know Fortran, but this page says:
The letter "d" must be embedded in the literal, otherwise, the compiler's pre-processor would round it off to be a Single Precision literal. For example, 3.1415926535 would be read as 3.141593 while 3.1415926535d+0 would be stored with all the digits intact. The letter "d" for double precision numbers has the same meaning as "e" for single precision numbers.
So it seems like your guess is correct.

real(8) resolution, data overflow

I have the following code which does not compile using gfortran:
program test_overflow
real(8) a,b
b=0.d0
a=1e39
write(*,*) a*b
end program
The error output from gfortran is
test.f90:4.14:
a=1e39
1
Error: Real constant overflows its kind at (1)
I wonder what the issue is here. As far as I remember, real(8) should give a double precision range of roughly 10 to the power of -100 to +100; am I wrong about this?
Use a=1d39 instead of a=1e39.
e denotes single precision.
d denotes double precision.
You may want to refer to the documentation for Double Precision Constants.
The literal
1e39
is a default real. It doesn't matter what is on the left-hand side of the assignment; Fortran always evaluates expressions without the surrounding context. The value 10^39 is too large for the default real of your processor.
real(8) may or may not give you double precision. Often it does, but it is compiler specific. If you insist on kind 8, although I won't recommend it, you get floating point literals of the same kind by using the suffix
1e39_8
It is better to place the value 8 in a named constant
integer, parameter :: wp = 8
(though it is better to use other methods than the magic number 8)
and then you can do
real(wp) :: x
and
1e39_wp
There is an old method which distinguishes just the default (sometimes called single precision) and double precision reals. In that case you declare your variables as
double precision :: x
and write literals as
1d39
Mixing the two methods (real(8) and 1d39) is strange.
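For reference, a corrected version of the program from the question using a named constant might look like this sketch (kind(1.d0) is just one possible way of setting wp, as discussed above):
program test_overflow
    implicit none
    integer, parameter :: wp = kind(1.d0)   ! kind of a double precision real
    real(wp) :: a, b
    b = 0._wp
    a = 1e39_wp    ! fits comfortably in a double precision real
    write(*,*) a*b
end program test_overflow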

Difference between code specified double precision and compiler option double precision

When writing Fortran code, declaring a real variable with kind=8 or double precision is one way of ensuring double precision. Another way is to declare nothing special in the code and instead use compiler options, e.g. -r8 (ifort).
Is there a difference between the two?
Carefully read these, though they are not 100 percent duplicates:
Fortran: integer*4 vs integer(4) vs integer(kind=4) and
Fortran 90 kind parameter.
kind=8 is non-portable and will not work with some compilers, and teaching it to beginners (which can be seen even in one Coursera course) should be criminally prosecuted.
There are different compiler options with different effects. Some promote all 4-byte reals or integers to 8-byte ones. Others set the default kind to 8. This breaks some standard assumptions about storage and about double precision being able to hold more than the default kind. For example, in gfortran:
-fdefault-real-8
sets the default real to an 8-byte real. This is not completely the same as:
-freal-4-real-8
which promotes all 4-byte reals to 8-byte ones. You must be careful to understand the differences, and generally using these options is not a very good practice.
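One way to observe the difference between the two options is to print the storage size of a default real and of a double precision variable, then compare the output across the flag combinations; a minimal sketch using the Fortran 2008 storage_size intrinsic:
program flag_difference
    implicit none
    real :: x
    double precision :: y
    ! Compile three times: with no flag, with -fdefault-real-8,
    ! and with -freal-4-real-8. The first flag may also widen
    ! double precision; the second leaves double precision alone.
    print *, 'default real bits:     ', storage_size(x)
    print *, 'double precision bits: ', storage_size(y)
end program flag_difference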
The recommended solution is always the same, using a named constant:
integer, parameter :: rp = ...
and always using
real(rp) :: x
You can have more than one of these constants. Set their values according to the referenced questions (real64, kind(1.d0), selected_real_kind(...)).
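A sketch of such a constants module, with the three definitions from the referenced questions shown as alternatives (the module and constant names are arbitrary):
module working_precision
    use, intrinsic :: iso_fortran_env, only: real64
    implicit none
    ! Pick exactly one definition; all three select an IEEE
    ! 64-bit real on common compilers:
    integer, parameter :: rp = real64
    ! integer, parameter :: rp = kind(1.d0)
    ! integer, parameter :: rp = selected_real_kind(15, 307)
end module working_precision
Every program unit then uses working_precision and writes real(rp) :: x and literals such as 1.0_rp.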

fortran default precision of numbers

I chose the precision in my code as:
integer, parameter :: psn = selected_real_kind(15,307)
then I write the numbers appearing in the code with _psn, for example:
y=x/6._psn
I have many long expressions in code where many numbers appear in multiplications and divisions.
Now my question is: Is there any way to set precision of all numbers appearing in the code to be in a selected precision without explicitly specifying _psn everywhere?
You can set the default kind of real for the compiler...
ifort:
-r8 Makes default real and complex variables 8 bytes long.
gfortran:
-fdefault-real-8 Set the default real type to an 8 byte wide type.
Then, you can just use 1.e0 in your code...
In a portable manner, no.
If you choose convenience and readability over portability and maintainability, you can use compiler flags to change the default kind of REAL, like the accepted answer says.
It is important to understand that, in doing this, some assumptions that could be made based on the standard are often broken. Here are some examples:
The decimal precision of the double precision real approximation method shall be greater than that of the default real method. If you only promote REAL, that guarantee is probably lost; however, at least in gfortran, the commonly used flag for that end still upholds the rule:
-fdefault-real-8
Set the default real type to an 8 byte wide type. This option also affects the kind of non-double real constants like 1.0. This option promotes the default width of DOUBLE PRECISION and double real constants like 1.d0 to 16 bytes if possible.
A nonpointer scalar object that is default integer, default real, or default logical occupies a single numeric storage unit.
A nonpointer scalar object that is double precision real or default complex occupies two contiguous numeric storage units.
These last two imply there is something called a numeric storage unit, which is a basic building block of storage for numeric types, and default REAL, INTEGER and LOGICAL variables occupy 1 of it, while default DOUBLE PRECISION and COMPLEX occupy 2 of it. Needless to say, those contracts are also probably broken when you change default kinds.
It is possible (and even probable) that the affected code does not rely on any of those anyway, and in that case it would be harmless to use such flags. However, it remains important to understand the risks.
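As a small illustration of the storage unit contract, the following sketch checks the two rules quoted above; compiled with default options both lines should print T, while with a flag such as -fdefault-real-8 at least the first will probably print F:
program storage_units
    implicit none
    ! A default real and a default integer each occupy one numeric
    ! storage unit; double precision occupies two of them.
    print *, storage_size(1.0)   == storage_size(1)
    print *, storage_size(1.0d0) == 2*storage_size(1.0)
end program storage_units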