What does "real*8" mean? - fortran

The manual of a program written in Fortran 90 says, "All real variables and parameters are specified in 64-bit precision (i.e. real*8)."
According to Wikipedia, single precision corresponds to 32-bit precision, whereas double precision corresponds to 64-bit precision, so apparently the program uses double precision.
But what does real*8 mean?
I thought that the 8 meant that 8 digits follow the decimal point. However, Wikipedia seems to say that single precision typically provides 6-9 digits whereas double precision typically provides 15-17 digits. Does this mean that the statement "64-bit precision" is inconsistent with real*8?

The 8 refers to the number of bytes that the data type uses.
So a 32-bit integer is integer*4 along the same lines.
A quick search found this guide to Fortran data types, which includes:
The "real*4" statement specifies the variable names to be single precision 4-byte real numbers which has 7 digits of accuracy and a magnitude range of 10 from -38 to +38. The "real" statement is the same as "real*4" statement in nearly all 32-bit computers.
and
The "real*8" statement specifies the variable names to be double precision 8-byte real numbers which has 15 digits of accuracy and a magnitude range of 10 from -308 to +308. The "double precision" statement is the same as "real*8" statement in nearly all 32-bit computers.

There are now at least 4 ways to specify precision in Fortran.
As already answered, real*8 specifies the number of bytes. It is somewhat obsolete but should be safe.
The new way is with "kinds". One should use the intrinsic functions to obtain the kind that has the precision you need. Specifying the kind by a specific numeric value is risky because different compilers use different values.
Yet another way is to use the named types of the ISO_C_Binding. This question discusses the kind system for integers -- it is very similar for reals.
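Putting the "kinds" way and the ISO_C_Binding way side by side, a minimal sketch could look like this (the parameter name dp is just an illustrative choice):

program kind_ways
  use iso_c_binding, only: c_double   ! kind interoperable with C's double
  implicit none
  ! ask for at least 15 significant digits and a decimal exponent range of 307
  integer, parameter :: dp = selected_real_kind(p=15, r=307)
  real(dp)       :: x   ! kind chosen by required precision
  real(c_double) :: y   ! kind chosen for C interoperability
  x = 1.0_dp / 3.0_dp
  y = real(x, kind=c_double)
  print *, x, y
end program kind_ways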

The star notation (as TYPE*n is called) is a non-standard Fortran construct if used with TYPE other than CHARACTER.
If applied to the character type, it specifies a string of n characters (i.e. a character entity of length n, not an array).
If applied to another type, it specifies the storage size in bytes. This should be avoided at any cost in Fortran 90+, where the concept of type KIND is introduced. Specifying storage size creates non-portable applications.
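For illustration, a short sketch contrasting the two uses of the star notation with their modern counterparts (assuming a compiler that accepts the non-standard numeric forms):

program star_vs_kind
  implicit none
  character*10      :: old_style   ! a string of length 10 (the character form is standard, though old-fashioned)
  character(len=10) :: new_style   ! the same declaration in modern syntax
  real*8                            :: a   ! non-standard: requests 8 bytes of storage
  real(selected_real_kind(15, 307)) :: b   ! portable: kind chosen by precision and range
  print *, len(old_style), len(new_style), kind(a), kind(b)
end program star_vs_kind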

Related

How to set Integer and Fractional Precision independently?

I'm learning Fortran (with the Fortran 2008 standard) and would like to set the integer part precision and decimal part precision for a real variable independently. How do I do this?
For example, let us say that I would like to declare a real variable that has an integer part precision of 3 and a fractional part precision of 8.
An example number in this above specification would be say 123.12345678 but 1234.1234567 would not satisfy the given requirement.
Fortran real numbers are FLOATING point numbers. Floating point numbers do not store the integer part and the decimal part. They store a significand and an exponent.
See how floating-point numbers work: http://en.wikipedia.org/wiki/Floating-point_arithmetic. There is usually one floating-point format which your CPU uses, and you cannot simply choose a different one.
What you are asking for is more like the FIXED point arithmetic, but modern CPUs and Fortran do not support it natively. https://en.wikipedia.org/wiki/Fixed-point_arithmetic
Fixed-point types are available in various libraries (probably even for Fortran) or other languages, but they are not the native REAL. They are typically implemented in software, not directly in the CPU, and are slower.
I ended up writing a function for this in order to use floating points with the .gt./.lt./.ge./.le./.eq. operators without actually modifying the floating points.
function PreciseInt(arg1, arg2) result(res)
  implicit none
  real*8, intent(in)    :: arg1 !Input variable to be converted
  integer*4, intent(in) :: arg2 !Input # of desired digits to the right of the decimal
  integer*4             :: res  !Integer representing the real value with desired precision
  res = idnint(arg1 * 10.d0**arg2)
end function PreciseInt
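A hypothetical usage (the function needs an explicit interface, for example by being placed in a module or used as an internal procedure):

! compare two reals after rounding both to 3 decimal places
if (PreciseInt(1.2341d0, 3) .eq. PreciseInt(1.2344d0, 3)) then
   print *, 'equal to 3 decimal places'
end if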

gfortran variable declaration to store 32-bit integer netcdf [duplicate]

I'm trying to learn Fortran and I'm seeing a lot of different definitions being passed around and I'm wondering if they're trying to accomplish the same thing. What is the difference between the following?
integer*4
integer(4)
integer(kind=4)
In Fortran >= 90, the best approach is to use the intrinsic functions to specify the precision you need -- this guarantees both portability and that you get the precision that you need. For example, to obtain integers i and my_int that will support at least 8 decimal digits, you could use:
integer, parameter :: RegInt_K = selected_int_kind (8)
integer (kind=RegInt_K) :: i, my_int
Having defined RegInt_K (or whatever name you select) as a parameter, you can use it throughout your code as a symbol. This also makes it easy to change the precision.
Requesting 8 or 9 decimal digits will typically obtain a 4-byte integer.
integer*4 is a common extension going back to old FORTRAN to specify a 4-byte integer. However, this syntax isn't, and never was, standard Fortran.
integer (4) or integer (RegInt_K) are short for integer (kind=4) or integer (kind=RegInt_K). integer (4) is not the same as integer*4 and is non-portable -- the language standard does not specify the numeric values of kinds. Most compilers use the kind=4 for 4-byte integers -- for these compilers integer*4 and integer(4) will provide the same integer type -- but there are exceptions, so integer(4) is non-portable and best avoided.
The approach for reals is similar.
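For example, the real analogue of the integer snippet above might be (the parameter name is again just a label):

integer, parameter :: RegReal_K = selected_real_kind (p=15, r=307)
real (kind=RegReal_K) :: x, my_real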
UPDATE: if you don't want to specify numeric types by the required precision, but instead by the storage that they will use, Fortran 2008 provides a method. Reals and integers can be specified by the number of bits of storage after using the ISO_FORTRAN_ENV module, for example, for a 4-byte (32-bit) integer:
use ISO_FORTRAN_ENV
integer (int32) :: MyInt
The gfortran manual has documentation under "intrinsic modules".
Just one more explicit explanation of what the kind is. The compiler has a table of different numerical types. All integer types are different kinds of the basic type -- integer. Let's say the compiler has 1-byte, 2-byte, 4-byte, 8-byte and 16-byte integer (or real) kinds. In the table the compiler has an index for each of these kinds -- this index is the kind number.
Many compilers choose this numbering:
kind number     number of bytes
     1                 1
     2                 2
     4                 4
     8                 8
    16                16
But they can choose any other numbering. One of the obvious possibilities is
kind number     number of bytes
     1                 1
     2                 2
     3                 4
     4                 8
     5                16
There are indeed compilers (at least g77 and NAG) which choose this approach. There are also options to change this. Therefore kind numbers are not portable: integer(kind=4) or integer(4) means a 4-byte integer or an 8-byte integer depending on the compiler.
integer*4 is portable in the sense that it always means 4 bytes. On the other hand, it is not portable in the sense that it has never been part of any standard: programs using this notation are not valid Fortran 77, Fortran 90 or any other standard Fortran.
For the right way to set the kind numbers, see M.S.B.'s answer.
The same concept holds for real data types. See Fortran 90 kind parameter (mataap's answer).
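To see which numbering your own compiler actually uses, a small sketch like the following prints a few kind values (everything used here is standard):

program show_kinds
  implicit none
  print *, 'integer kind for >=  9 digits:', selected_int_kind(9)    ! usually a 4-byte integer
  print *, 'integer kind for >= 18 digits:', selected_int_kind(18)   ! usually an 8-byte integer
  print *, 'default integer kind         :', kind(0)
  print *, 'default real / double kinds  :', kind(0.0), kind(0.d0)
end program show_kinds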
I will make reference to this enlightening article, written recently by #SteveLionel, and try to cover some details that are not present in the other answers so far:
The syntax shown in integer*n or real*n was a common extension provided by compilers a long time ago, when different computer architectures started to have different designs for the in-memory format of integer and real values, where n was the size in bytes of the value stored. However, that said nothing about the range or precision of those values: different implementations of a 16-bit integer, for example, could provide different ranges and limit values.
Register sizes could be 8, 12, 16, 30, 32, 36, 48, 60 or 64 bits, some CDC machines had ones-complement integers (allowing minus zero for an integer!), the PDP-11 line had several different floating point formats depending on the series, the IBM 360/370 had "hex normalization" for its floating point, etc [...] So popular were these extensions that many programmers thought (and even today many think) that this syntax is standard Fortran; it isn't!
When Fortran 90 came out, kind parameters were added to the language, along with intrinsic inquiry functions (especially kind, selected_int_kind and selected_real_kind, but also others, like precision, digits, epsilon...) to help the programmer specify minimum requirements for the precision and range of numeric types (still with no official mention of a storage model or bytes). The syntax is integer(kind=n) or even integer(n), where n is a constant value corresponding to a kind of integer supported by the compiler. For literal constants, the syntax is 12_n or 3.4e-2_n.
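A small sketch of that syntax (the kind parameter names ik and wp are just illustrative):

program kind_literals
  implicit none
  integer, parameter :: ik = selected_int_kind(9)        ! at least 9 decimal digits
  integer, parameter :: wp = selected_real_kind(15, 307) ! at least 15 digits, range 10**307
  integer(kind=ik) :: counter
  real(wp)         :: tolerance
  counter   = 12_ik        ! kind-suffixed integer literal
  tolerance = 3.4e-2_wp    ! kind-suffixed real literal
  print *, counter, tolerance
end program kind_literals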
The advantage of this solution was that Fortran didn't (and still doesn't) make any assumptions about the implementation details of data types other than the results of the inquiry functions used to choose the type, so the code is parameterized by the problem being solved, not by the language or the hardware. The gotcha is, as said in other answers, that each compiler can choose its own kind numbers, so assuming a magic number like integer(4) is not portable.
Also with Fortran 90 came the concept of default kinds, that is what you get when you don't specify a kind.
The default kinds were implementation dependent, though up through Fortran 2008 a compiler was required to support only one integer kind and two real kinds. (That's still true in Fortran 2018, but there's an added requirement that at least one integer kind support 18 decimal digits.) If you write a constant literal without a kind specifier, you get the default kind.
With Fortran 2003 and the inclusion of the intrinsic module ieee_arithmetic, you can inquire about and select a real type with IEEE floating point capabilities, if available.
There are architectures where both IEEE and non-IEEE floating types are available, such as the HP (formerly Compaq formerly DEC) Alpha. In this case you can use IEEE_SELECTED_REAL_KIND from intrinsic module IEEE_ARITHMETIC to get an IEEE floating kind. And what if there is no supported kind that meets the requirements? In that case the intrinsics return a negative number which will (usually, depending on context) trigger a compile-time error.
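A hedged sketch of that inquiry, assuming the compiler provides the ieee_arithmetic module:

program ieee_kind_demo
  use, intrinsic :: ieee_arithmetic, only: ieee_selected_real_kind
  implicit none
  ! request an IEEE real kind with at least 15 digits and a decimal range of 307
  integer, parameter :: ieee_dp = ieee_selected_real_kind(15, 307)
  real(ieee_dp) :: z   ! compilation fails here if ieee_dp came back negative
  z = 1.0_ieee_dp
  print *, precision(z), range(z)
end program ieee_kind_demo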
Finally, Fortran 2003 brought the iso_fortran_env intrinsic module, which lets you inquire about the storage sizes used by a compiler through named constants such as numeric_storage_size (complementing intrinsics like bit_size). Another addition of the Fortran 2003 revision was the iso_c_binding intrinsic module, which provided kind parameter values to guarantee compatibility with C types, in storage, precision and range.
Intrinsic module ISO_C_BINDING declares constants for Fortran types that are interoperable with C types, for example C_FLOAT and C_INT. Use these if you're declaring variables and interfaces interoperable with C.
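For instance, a sketch of declarations interoperable with C:

program c_interop_kinds
  use, intrinsic :: iso_c_binding, only: c_int, c_float, c_double
  implicit none
  integer(c_int) :: n   ! matches a C int
  real(c_float)  :: x   ! matches a C float
  real(c_double) :: y   ! matches a C double
  n = 1; x = 2.0_c_float; y = 3.0_c_double
  print *, n, x, y
end program c_interop_kinds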
As a final note, I will mention the more recent Fortran 2008 standard, which extended the intrinsic module iso_fortran_env to include the named constants int8, int16, int32, int64, real32, real64 and real128, whose values correspond to the integer and real kinds that occupy the stated number of bits. The gotcha is that those constants only assure storage size, not precision or range. Only use them when this is exactly what you want.
In my view, this is little better than the old *n extension in that it tells you that a type fits in that many bits, but nothing else about it. As an example, there's one compiler where REAL128 is stored in 128 bits but is really the 80-bit "extended precision" real used in the old x86 floating point stack registers. If you use these you might think you're using a portable feature, but really you're not and may get bitten when the kind you get doesn't have the capabilities you need.

fortran default precision of numbers

I chose the precision in my code as :
integer, parameter ::psn=selected_real_kind(15,307)
then I write numbers appearing in the code with _psn, for example :
y=x/6._psn
I have many long expressions in code where many numbers appear in multiplications and divisions.
Now my question is: Is there any way to set precision of all numbers appearing in the code to be in a selected precision without explicitly specifying _psn everywhere?
You can set the default kind of real for the compiler...
ifort:
-r8 Makes default real and complex variables 8 bytes long.
gfortran:
-fdefault-real-8 Set the default real type to an 8 byte wide type.
Then, you can just use 1.e0 in your code...
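A quick sketch to verify what such a flag does (compile once with and once without -fdefault-real-8 or -r8; the kind and storage size of the default real should change):

program default_real_check
  implicit none
  real :: x   ! default real; promoted to 8 bytes by the flag
  print *, kind(x), precision(x), storage_size(x)   ! storage_size is a Fortran 2008 intrinsic
end program default_real_check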
In a portable manner, no.
If you choose convenience and readability over portability and maintainability, you can use compiler flags to change the default kind of REAL, like the accepted answer says.
It is important to understand that, in doing this, some assumptions that could be made based on the standard are often broken. Here are some examples:
The decimal precision of the double precision real approximation method shall be greater than that of the default real method. If you only promote REAL, that guarantee is probably lost; however, at least in gfortran, the commonly used flag for that purpose still upholds the rule:
-fdefault-real-8
Set the default real type to an 8 byte wide type. This option also affects the kind of non-double real constants like 1.0. This option promotes the default width of DOUBLE PRECISION and double real constants like 1.d0 to 16 bytes if possible.
A nonpointer scalar object that is default integer, default real, or default logical occupies a single numeric storage unit.
A nonpointer scalar object that is double precision real or default complex occupies two contiguous numeric storage units.
These last two imply there is something called a numeric storage unit, which is a basic building block of storage for numeric types, and default REAL, INTEGER and LOGICAL variables occupy 1 of it, while default DOUBLE PRECISION and COMPLEX occupy 2 of it. Needless to say, those contracts are also probably broken when you change default kinds.
It is possible (and even probable) that the affected code does not rely on any of those anyway, and in that case it would be harmless to use such flags. However, it remains important to understand the risks.

How to calculate numbers to arbitrarily high precision?

I wrote a simple fortran program to compute Gauss's constant :
program main
  implicit none
  integer :: i, nit
  double precision :: u0, v0, ut, vt
  nit = 60
  u0 = 1.d0
  v0 = sqrt(2.d0)
  print *, 1.d0/u0, 1.d0/v0
  do i = 1, nit
    ut = sqrt(u0*v0)
    vt = (u0+v0)/2.d0
    u0 = ut
    v0 = vt
    print *, 1.d0/u0, 1.d0/v0
  enddo
end program main
The result is 0.83462684167407308 after 4 iterations. Is there any way to get better results using the arithmetic-geometric mean method? How do people compute many digits for numbers such as pi, Euler's constant, and so on? Does each irrational number have a specific algorithm?
If your goal is to insert a constant value into your program, the easiest solution is to look up the value on the web or in a book. Be sure to add a kind specification to the numeric value, otherwise Fortran will treat it as the default of single precision. One could write pi as pi_quad = 3.14159265358979323846264338327950288_real128 -- showing the use of a kind specifier on a constant.
If you want to do high precision calculations, you could use a high-precision type available in your compiler. Many compilers now have quadruple precision. If they have the Fortran 2008 version of the ISO_FORTRAN_ENV module, you can request this via the type real128.
Arbitrary precision (user specified number of digits, to very high number of digits) is outside the language and is available in libraries, e.g., MPFUN90, http://crd-legacy.lbl.gov/~dhbailey/mpdist/
Yes, different constants have various algorithms. This is a very large topic.
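As a sketch, the same arithmetic-geometric mean iteration in quadruple precision (assuming your compiler supports real128 from ISO_FORTRAN_ENV) yields roughly 33 significant digits:

program gauss_quad
  use, intrinsic :: iso_fortran_env, only: real128
  implicit none
  integer :: i
  real(real128) :: u, v, ut
  u = 1.0_real128
  v = sqrt(2.0_real128)
  do i = 1, 10               ! the AGM converges quadratically, so a few iterations suffice
    ut = sqrt(u*v)
    v  = (u + v) / 2.0_real128
    u  = ut
  end do
  print *, 1.0_real128/u     ! Gauss's constant
end program gauss_quad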
Solution for pi:
pi = 4.0d0 * datan(1.0d0)

Does the dot in the end of a float suggest lack of precision?

When I debug my software in VS C++ by stepping the code I notice that some float calculations show up as a number with a trailing dot, i.e.:
1232432.
One operation that led up to this result is this:
float result = pow(10, a * 0.1f) / b;
where a is a large negative number around -50 to -100 and b is most often around 1. I read some articles about problem with precision when it comes to floating-points. My question is just if the trailing dot is a Visual-Studio-way of telling me that the precision is very low on this number, i.e. in the variable result. If not, what does it mean?
This came up at work today and I remember that there was a problem for larger numbers, so this did not occur every time (and by "this" I mean that trailing dot). But I do remember that it happened when there were seven digits in the number. Here they write that the precision of floats is seven digits:
C++ Float Division and Precision
Can this be the thing and Visual Studio tells me this by putting a dot in the end?
I THINK I FOUND IT! It says "The mantissa is specified as a sequence of digits followed by a period". What does the mantissa mean? Can this be different on a PC and when running the code on a DSP? Because the thing is that I get different results and the only thing that looks strange to me is this period-thing, since I don't know what it means.
http://msdn.microsoft.com/en-us/library/tfh6f0w2(v=vs.71).aspx
If you're referring to the "sig figs" convention where "4.0" means 4±0.1 and "4.00" means 4±0.01, then no, there's no such concept in float or double. Numbers are always* stored with 24 or 53 significant bits (7.22 or 15.95 decimal digits) regardless of how many are actually "significant".
The trailing dot is just a decimal point without any digits after it (which is a legal C literal). It either means that
The value is 1232432.0 and they trimmed the unnecessary trailing zero, OR
Everything is being rounded to 7 significant digits (in which case the true value might also be 1232431.5, 1232431.625, 1232431.75, 1232431.875, 1232432.125, 1232432.25, 1232432.375, or 1232432.5.)
The real question is, why are you using float? double is the "normal" floating-point type in C(++), and float a memory-saving optimization.
* Pedants will be quick to point out denormals, x87 80-bit intermediate values, etc.
The precision is not variable; that is simply how VS is formatting it for display. The precision (or lack thereof) is always constant for a given floating-point number.
The MSDN page you linked to talks about the syntax of a floating-point literal in source code. It doesn't define how the number will be displayed by whatever tool you're using. If you print a floating-point number using either printf or std::cout << ..., the language standard specifies how it will be printed.
If you print it in the debugger (which seems to be what you're doing), it will be formatted in whatever way the developers of the debugger decided on.
There are a number of different ways that a given floating-point number can be displayed: 1.0, 1., 10.0E-001, and .1e+1 all mean exactly the same thing. A trailing . does not typically tell you anything about precision. My guess is that the developers of the debugger just used 1232432. rather than 1232432.0 to save space.
If you're seeing the trailing . for some values, and a decimal number with no . at all for others, that sounds like an odd glitch (possibly a bug) in the debugger.
If you're wondering what the actual precision is, for IEEE 32-bit float (the format most computers use these days), the next representable numbers before and after 1232432.0 are 1232431.875 and 1232432.125. (You'll get much better precision using double rather than float.)