Very small numbers in Fortran

Is it possible to represent very small numbers in Fortran, like 1E-1200? I know how to do it in Python, but my code for my master's thesis runs too slowly. My supervisor recommended that I use Fortran, but I am not sure whether this will be a problem.

Other answers suggesting use of REAL128 from ISO_FORTRAN_ENV describe a non-portable solution. What you would get there is a real type whose representation is 128 bits, but that says nothing about the range or precision of the type! For example, some IBM systems have a 128-bit real kind that is actually two doubles with an offset exponent. This gets you more precision but not significantly more range.
The proper way to do this is to use the SELECTED_REAL_KIND intrinsic function to determine the implementation's kind that supports the desired range. For example:
integer, parameter :: bigreal = SELECTED_REAL_KIND(R=1200)
real(KIND=bigreal) :: x
If the implementation doesn't have a real kind that can represent a value with a decimal exponent of plus or minus 1200, SELECTED_REAL_KIND returns a negative value (and using that value as a kind is then rejected at compile time); otherwise you get the smallest appropriate kind.
You could also specify a P= value in the call to indicate what minimum precision (in decimal digits) you need.
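For instance, here is a minimal sketch combining both arguments (the kind name bigreal is just illustrative):
program small_numbers
implicit none
! Request at least 15 decimal digits and a decimal exponent range of
! at least 1200; a negative result would mean no such kind exists.
integer, parameter :: bigreal = SELECTED_REAL_KIND(P=15, R=1200)
real(KIND=bigreal) :: x
x = 1.0E-1200_bigreal   ! literal constant of the requested kind
print *, x, tiny(x), huge(x)
end program small_numbers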

The short answer is yes.
Modern compilers typically support so-called quad precision, a 128-bit real. A portable way to access this type is through the intrinsic module ISO_FORTRAN_ENV. Here is a sample program to show how big and small these numbers get:
program main
use ISO_FORTRAN_ENV, only : REAL32, REAL64, REAL128
! -- tiny and huge grab the smallest and largest
! -- representable number of each type
write(*,*) 'Range for REAL32: ', tiny(1._REAL32), huge(1._REAL32)
write(*,*) 'Range for REAL64: ', tiny(1._REAL64), huge(1._REAL64)
write(*,*) 'Range for REAL128: ', tiny(1._REAL128), huge(1._REAL128)
end program main
The types REAL32, REAL64, and REAL128 are typically known as single, double, and quad precision. Longer types have a larger range of representable numbers and greater precision.
On my machine with gfortran 4.8, I get:
mach5% gfortran types.f90 && ./a.out
Range for REAL32: 1.17549435E-38 3.40282347E+38
Range for REAL64: 2.2250738585072014E-308 1.7976931348623157E+308
Range for REAL128: 3.36210314311209350626E-4932 1.18973149535723176502E+4932
As you can see, quad precision can represent normal numbers as small as roughly 3.36E-4932.

Most Fortran compilers support a REAL128 data format that matches IEEE binary128. Some also support a non-standard 80-bit extended-precision kind with a similar range, matching C's long double on x86. Neither has the performance of REAL64, but both should still be much faster than Python.

Fortran precision default with/out compiler flag

I'm having trouble with the precision of numeric constants in Fortran.
Do I need to write every 0.1 as 0.1d0 to get double precision? I know the compiler has a flag such as -fdefault-real-8 in gfortran that solves this kind of problem. Would that be a portable and reliable way to do it? And how can I check whether the flag actually works for my code?
I am using f2py to call Fortran code from my Python code, and it doesn't report an error even if I pass an unrecognized flag, which is what worries me.
In a Fortran program 1.0 is always a default real literal constant and 1.0d0 always a double precision literal constant.
However, "double precision" means different things in different contexts.
In Fortran contexts "double precision" refers to a particular kind of real which has greater precision than the default real kind. In more general communication "double precision" is often taken to mean a particular real kind of 64 bits which matches the IEEE floating point specification.
gfortran's compiler flag -fdefault-real-8 means that the default real takes 8 bytes and is likely to be that which the compiler would use to represent IEEE double precision.
So, 1.0 is a default real literal constant, not a double precision literal constant, but a default real may happen to be the same as an IEEE double precision.
Questions like this one reflect the implications of precision in literal constants. To anyone asking my advice about flags like -fdefault-real-8, I would say: avoid them.
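As for the "how can I check" part, a small probe program (a sketch; the file name is arbitrary) prints the kind and precision of the two kinds of literal:
program kinds
implicit none
! kind() and precision() are intrinsic inquiry functions
print *, 'kind(1.0)   = ', kind(1.0),   '  precision = ', precision(1.0)
print *, 'kind(1.0d0) = ', kind(1.0d0), '  precision = ', precision(1.0d0)
end program kinds
Compile it with and without -fdefault-real-8; if the first line changes, the flag took effect.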
Adding to @francescalus's answer above: in my opinion, since the meaning of double precision can change across platforms and compilers, it is good practice to explicitly declare the desired kind of the constant using the standard Fortran suffix convention, as in the following example:
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
write(*,"(*(g20.15))") "real64: ", 2._RK / 3._RK
write(*,"(*(g20.15))") "double precision: ", 2.d0 / 3.d0
write(*,"(*(g20.15))") "single precision: ", 2.e0 / 3.e0
end program test
Compiling this code with gfortran gives:
$gfortran -std=gnu *.f95 -o main
$main
real64: .666666666666667
double precision: .666666666666667
single precision: .666666686534882
Here, the results in the first two lines (explicit request for 64-bit real kind, and double precision kind) are the same. However, in general, this may not be the case and the double precision result could depend on the compiler flags or the hardware, whereas the real64 kind will always conform to 64-bit real kind computation, regardless of the default real kind.
Now consider another scenario: a real variable is declared with a 64-bit kind, but the numerical computation on the right-hand side is done in 32-bit precision:
program test
use, intrinsic :: iso_fortran_env, only: RK => real64
implicit none
real(RK) :: real_64
real_64 = 2.e0 / 3.e0
write(*,"(*(g30.15))") "32-bit accuracy is returned: ", real_64
real_64 = 2._RK / 3._RK
write(*,"(*(g30.15))") "64-bit accuracy is returned: ", real_64
end program test
which gives the following output,
$gfortran -std=gnu *.f95 -o main
$main
32-bit accuracy is returned: 0.666666686534882
64-bit accuracy is returned: 0.666666666666667
Even though the variable is declared as real64, the result on the first line is still wrong, in the sense that it does not conform to the 64-bit precision you want. The reason is that the computation is first done in the (default 32-bit) precision of the literal constants and only then stored in the 64-bit variable real_64, hence the difference from the more accurate answer on the second line of the output.
So the bottom-line message is: It is always a good practice to explicitly declare the kind of the literal constants in Fortran using the "underscore" convention.
The answer to your question is: yes, you do need to indicate that the constant is double precision. Using 0.1 is a common example of this problem, as the 4-byte and 8-byte representations differ. Other constants (e.g. 0.5) whose extended-precision bytes are all zero don't have this problem.
This behaviour was codified in Fortran at F90 and has caused problems when converting and reusing many legacy FORTRAN codes. Prior to F90, the result of DOUBLE PRECISION A; A = 0.1 could have used either a real or a double 0.1 constant, although all compilers I used provided a double precision value. This can be a common source of inconsistent results when testing legacy codes against published results. Examples are frequently reported, e.g. PI=3.141592654 appeared in code on a forum this week.
However, using 0.1 as a subroutine argument has always caused problems, as it is passed as a real constant.
So, given the history of how real constants have been handled, you do need to explicitly specify a double precision constant when one is required. It is not a user-friendly approach.
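A small illustration of the 0.1 problem (a sketch; the exact nonzero value printed is representation-dependent):
program tenth
implicit none
double precision :: a
a = 0.1               ! default-real constant, widened on assignment
print *, a - 0.1d0    ! nonzero: the widened 0.1 differs from the double 0.1
print *, 0.5 - 0.5d0  ! zero: 0.5 is exact in both precisions
end program tenth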

Why do I keep getting 0 as output? [duplicate]

This question already has an answer here:
Why are the elements of an array formatted as zeros when they are multiplied by 1/2 or 1/3?
Why does this Fortran program produce only zeros? When I print it out I get -0.00000 everywhere! What have I done wrong? In MATLAB it runs perfectly; I don't see any reason why it's not working, to be honest!
It seems like it's the fraction that messes it up: if I set x equal to some decimal number, it works.
program main
implicit none
integer iMax, jMax
double precision, dimension(:,:), allocatable :: T
double precision x, dx,f,L2old,L2norm,y
integer i, j,n,bc
n=10
allocate(T(1:n+2, 1:n+2))
T=0.0d0
do i=2,n+1
do j=2,n+1
x=(j+1)*1/24
y=(i+1)*1/24
T(i,j)= -18*(x**2+y**2)**2
Write(*,*)'T(',i,',',j,') = ', T(i,j)
end do
end do
Write(*,*)'T(1,1)',T(1,1)
end program main
x=(j+1)*1/24
1/24 is an integer division that rounds down to 0. You should be able to force floating point division by making at least one of the operands floating point,
e.g.
x=(j+1)*1.0/24.0
As Jim Lewis indicated, the answer to the OP's question was indeed the integer division used.
Nonetheless, I think it is important to point out that one should take care of how the floating-point fraction is written down. As the OP's program shows, x is of type DOUBLE PRECISION. Then the correct result should be
x=(j+1)*1.0D0/24.0D0
The difference here is that now you ensure that the division happens with the same precision as x was declared.
The following program demonstrates the problem:
program test
WRITE(*,'(A43)') "0.0416666666666666666666666666666666..."
WRITE(*,'(F40.34)') REAL(1/24)   ! integer division yields 0 before the conversion
WRITE(*,'(F40.34)') 1.0/24.0
WRITE(*,'(F40.34)') 1.0D0/24.0
WRITE(*,'(F40.34)') 1.0D0/24.0D0
end program test
which has the output
0.0416666666666666666666666666666666...
0.0000000000000000000000000000000000
0.0416666679084300994873046875000000
0.0416666666666666643537020320309239
0.0416666666666666643537020320309239
You clearly see the differences. The first line is the mathematically correct result. The second line is the integer division leading to zero. The third line shows the output when the division is computed in REAL, while the fourth and fifth lines are in DOUBLE PRECISION. Please take into account that in my case REAL implies a 32-bit floating-point number and DOUBLE PRECISION a 64-bit version. The precision and representation of both REAL and DOUBLE PRECISION are compiler-dependent and not defined by the Standard. It only requires that DOUBLE PRECISION have a higher precision than REAL.
4.4.2.3 Real type
1 The real type has values that approximate the mathematical real numbers. The processor shall provide two or more approximation methods that define sets of values for data of type real. Each such method has a representation method and is characterized by a value for the kind type parameter KIND. The kind type parameter of an approximation method is returned by the intrinsic function KIND (13.7.89).
5 If the type keyword REAL is used without a kind type parameter, the
real type with default real kind is specified and the kind value is
KIND (0.0). The type specifier DOUBLE PRECISION specifies type real
with double precision kind; the kind value is KIND (0.0D0). The
decimal precision of the double precision real approximation method
shall be greater than that of the default real method.
This actually implies that, if you want to ensure that your computations are done using 32bit, 64bit or 128bit floating point representations, you are advised to use the correct KIND values as defined in the intrinsic module ISO_FORTRAN_ENV.
13.8.2.21 REAL32, REAL64, and REAL128
1 The values of these default integer scalar named constants shall be
those of the kind type parameters that specify a REAL type whose
storage size expressed in bits is 32, 64, and 128 respectively. If,
for any of these constants, the processor supports more than one kind
of that size, it is processor dependent which kind value is provided.
If the processor supports no kind of a particular size, that constant
shall be equal to −2 if the processor supports kinds of a larger size
and −1 otherwise.
So this would lead to the following code
PROGRAM main
USE iso_fortran_env, ONLY : DP => REAL64
IMPLICIT NONE
...
REAL(DP) :: x
...
x = (j+1)*1.0_DP/24.0_DP
...
END PROGRAM main

How to set Integer and Fractional Precision independently?

I'm learning Fortran (with the Fortran 2008 standard) and would like to set the integer part precision and decimal part precision for a real variable independently. How do I do this?
For example, let us say that I would like to declare a real variable whose integer part precision is 3 and whose fractional part precision is 8.
An example number satisfying this specification would be, say, 123.12345678, but 1234.1234567 would not satisfy the requirement.
Fortran real numbers are FLOATING point numbers. Floating point numbers do not store the integer part and the decimal part. They store a significand and an exponent.
See how floating-point numbers work: http://en.wikipedia.org/wiki/Floating-point_arithmetic. There is usually one floating-point format that your CPU uses, and you cannot simply choose a different one.
What you are asking for is more like the FIXED point arithmetic, but modern CPUs and Fortran do not support it natively. https://en.wikipedia.org/wiki/Fixed-point_arithmetic
You can use fixed-point types through various libraries (probably even Fortran ones) or other languages, but they are not the native REAL. They are typically implemented in software rather than directly in the CPU, and are slower.
I ended up writing a function for this in order to compare floating-point values with the .gt./.lt./.ge./.le./.eq. operators at a chosen precision without actually modifying the floating-point values.
function PreciseInt(arg1, arg2) Result(res)
implicit none
real*8 arg1      !Input variable to be converted
integer*4 arg2   !Input # for desired precision to the right of the decimal
integer*4 res    !Integer representing the real value with desired precision
res = idnint(arg1 * 10.d0**arg2)   !Scale, then round to the nearest integer
end function PreciseInt
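For instance, a hypothetical usage comparing two values at 8 fractional digits (both round to the same scaled integer, so they compare equal):
program compare
implicit none
integer*4 :: PreciseInt   ! external function declared above
real*8 :: a, b
a = 1.234567891d0
b = 1.234567892d0
print *, PreciseInt(a, 8) .eq. PreciseInt(b, 8)   ! prints T
end program compare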

How to calculate numbers to arbitrarily high precision?

I wrote a simple Fortran program to compute Gauss's constant:
program main
implicit none
integer :: i, nit
double precision :: u0, v0, ut, vt
nit=60
u0=1.d0
v0=sqrt(2.d0)
print *,1.d0/u0,1.d0/v0
do i=1,nit
ut=sqrt(u0*v0)
vt=(u0+v0)/2.d0
u0=ut
v0=vt
print *,1.d0/u0,1.d0/v0
enddo
end program main
The result is 0.83462684167407308 after 4 iterations. Is there any way to get better results using the arithmetic-geometric mean method? How do people compute many digits for numbers such as pi, Euler's constant, and so on? Does each irrational number have a specific algorithm?
If your goal is to insert a constant value into your program, the easiest solution is to look up the value on the web or in a book. Be sure to add a type specification to the numeric value; otherwise Fortran will treat it as the default single precision. One could write pi as pi_quad = 3.14159265358979323846264338327950288_real128, showing the use of a type specifier on a constant.
If you want to do high-precision calculations, you could use some high-precision type available in your compiler. Many compilers now have quadruple precision. If they have the Fortran 2008 version of the ISO_FORTRAN_ENV module, you can request this via the type real128.
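For example, here is a sketch of the same AGM iteration in quadruple precision, assuming the compiler provides REAL128:
program agm_quad
use ISO_FORTRAN_ENV, only : qp => real128
implicit none
integer :: i
real(qp) :: u, v, ut
u = 1.0_qp
v = sqrt(2.0_qp)
! The AGM converges quadratically, so a few iterations suffice
do i = 1, 10
ut = sqrt(u*v)
v = (u + v) / 2.0_qp
u = ut
end do
print *, 1.0_qp/u   ! Gauss's constant, to roughly 33 significant digits
end program agm_quad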
Arbitrary precision (user specified number of digits, to very high number of digits) is outside the language and is available in libraries, e.g., MPFUN90, http://crd-legacy.lbl.gov/~dhbailey/mpdist/
Yes, different constants have various algorithms. This is a very large topic.
Solution for pi:
pi = 4.0d0 * datan(1.0d0)

What does "real*8" mean?

The manual of a program written in Fortran 90 says, "All real variables and parameters are specified in 64-bit precision (i.e. real*8)."
According to Wikipedia, single precision corresponds to 32-bit precision, whereas double precision corresponds to 64-bit precision, so apparently the program uses double precision.
But what does real*8 mean?
I thought that the 8 meant that 8 digits follow the decimal point. However, Wikipedia seems to say that single precision typically provides 6-9 digits whereas double precision typically provides 15-17 digits. Does this mean that the statement "64-bit precision" is inconsistent with real*8?
The 8 refers to the number of bytes that the data type uses.
So a 32-bit integer is integer*4 along the same lines.
A quick search found this guide to Fortran data types, which includes:
The "real*4" statement specifies the variable names to be single precision 4-byte real numbers which has 7 digits of accuracy and a magnitude range of 10 from -38 to +38. The "real" statement is the same as "real*4" statement in nearly all 32-bit computers.
and
The "real*8" statement specifies the variable names to be double precision 8-byte real numbers which has 15 digits of accuracy and a magnitude range of 10 from -308 to +308. The "double precision" statement is the same as "real*8" statement in nearly all 32-bit computers.
There are now at least 4 ways to specify precision in Fortran.
As already answered, real*8 specifies the number of bytes. It is somewhat obsolete but should be safe.
The new way is with "kinds". One should use the intrinsic functions to obtain the kind that has the precision that you need. Specifying the kind by specific numeric value is risky because different compilers use different values.
Yet another way is to use the named types of the ISO_C_Binding. This question discusses the kind system for integers -- it is very similar for reals.
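As a sketch, the following declarations typically all denote the same 64-bit real kind with gfortran (the parameter name dp is arbitrary):
program four_ways
use ISO_FORTRAN_ENV, only : real64
use ISO_C_BINDING, only : c_double
implicit none
integer, parameter :: dp = selected_real_kind(p=15, r=300)
real*8 :: a          ! non-standard byte-count notation
real(dp) :: b        ! kind from SELECTED_REAL_KIND
real(real64) :: c    ! named constant from ISO_FORTRAN_ENV
real(c_double) :: d  ! interoperable kind from ISO_C_BINDING
print *, kind(a), kind(b), kind(c), kind(d)   ! typically all print 8
end program four_ways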
The star notation (as TYPE*n is called) is a non-standard Fortran construct if used with TYPE other than CHARACTER.
If applied to the character type, it declares a string of n characters (i.e., a character entity of length n).
If applied to another type, it specifies the storage size in bytes. This should be avoided at all costs in Fortran 90+, where the concept of type KIND is introduced. Specifying storage size creates non-portable applications.
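A short sketch of the contrast (names are arbitrary):
program star_notation
implicit none
character*10 :: s        ! length-10 string (obsolescent but valid for CHARACTER)
character(len=10) :: t   ! modern equivalent
real*8 :: x              ! non-standard: requests 8 bytes of storage
real(kind(1.0d0)) :: y   ! portable: same kind as a double precision constant
print *, len(s), len(t), kind(x), kind(y)
end program star_notation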