Fortran 95: inline evaluation of if-conditions

Here is a small snippet of code that computes epsilon() for a real value:
program epstest
  real :: eps = 1.0, d
  do
    d = 1.0 + eps
    if (d == 1.0) then
      eps = eps*2
      exit
    else
      eps = eps/2
    end if
  end do
  write(*,*) eps, epsilon(d)
  pause
end program
Now, when I replace the if condition by
if (1.0+eps==1.0) then
the program should return the same result, but unfortunately it does not! I tested it with the latest (snapshot) release of g95 on Linux and Windows.
Can someone explain that issue to me?

Floating point arithmetic has many subtle issues. One form of source code can generate different machine instructions than another, seemingly almost identical, form. For example, "d" might get stored to a memory location, while "1.0 + eps" might be evaluated solely in registers ... this can cause different precision, because registers may hold values with more precision (e.g. 80-bit x87) than the 32-bit memory representation.
More generally, why not use the intrinsic functions provided by Fortran 95 that disclose the characteristics of a particular precision of reals?
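For example, a minimal sketch using only the standard inquiry intrinsics (nothing here is compiler-specific) might look like this:

program real_characteristics
  implicit none
  real :: x
  ! Standard Fortran 90/95 inquiry intrinsics for the default real kind
  write(*,*) 'epsilon   =', epsilon(x)    ! model spacing near 1.0
  write(*,*) 'precision =', precision(x)  ! decimal digits of precision
  write(*,*) 'digits    =', digits(x)     ! binary digits in the mantissa
  write(*,*) 'tiny      =', tiny(x)       ! smallest positive normal number
  write(*,*) 'huge      =', huge(x)       ! largest finite number
end program real_characteristics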

Related

FORTRAN 77 Divide By Zero Behavior

I have been re-engineering an old FORTRAN 77 program into Python for a while now. I'm running into an issue, though: when dividing by zero, it appears that the FORTRAN program just continues processing the data without issue. However, predictably, it causes an issue in Python. I'm not able to find a discussion about this on any official channel for F77, and I only have an old version of the source code for the program I am translating that I can't get to compile.
TL;DR: How does F77 handle division by zero for the following cases?:
REAL division
INT division
The numerator is also zero (e.g. 0./0.)
Yes, I also have code that does nothing when a divide by zero error is encountered. Usually, it is the programmer's responsibility to ensure that the results are either expected (the target variable's value is unchanged) or an error is thrown, etc. In other words, you need to inspect any division operation for a possible zero divisor. Modern operating systems throw an internal exception on divide by zero (and assign NaN to the target variable if the system is set to continue under these circumstances); most older Fortran code is written such that a divide by zero doesn't matter.
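As a rough illustration, here is a minimal sketch of what IEEE arithmetic typically gives; the exact behavior depends on the compiler and on trapping options such as gfortran's -ffpe-trap:

program divzero_demo
  implicit none
  real :: one, zero
  one  = 1.0
  zero = 0.0
  ! With IEEE arithmetic and trapping disabled (the usual default),
  ! REAL division by zero silently produces Inf or NaN:
  print *, '1.0/0.0 = ', one/zero    ! +Infinity
  print *, '0.0/0.0 = ', zero/zero   ! NaN
  ! INTEGER division by zero has no representable result; it is not
  ! defined by the standard and typically aborts the program, so it is
  ! left commented out here:
  ! print *, 1/0
end program divzero_demo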

Adding double precision values yields different results between separate programs in C++

I have a question about floating point addition. I understand how compilers and processor architecture can lead to differences in floating point results. I have seen many questions on here similar to mine, but they all have some variation such as a different compiler, different code, or a different machine. However, I am running into an issue when adding doubles in the exact same way in two different programs calling the identical function with the same arguments, and it is leading to different results. Both programs are compiled on the same machine with the same compiler/flags. The code looks similar to this:
void function(double tx, double ty, double tz){
    double answer;
    double x, y;
    x = y = answer = 0;
    x = tx - ty;
    y = ty - tz;
    answer = (tx + ty + tz) * (x*y);
}
The values of:
tx,ty,tz
are on the order of [10e-15,10e-30]. Obviously this is a very simplified version of the functions I am actually using, but, is it possible for two programs, running identical floating point arithmetic (not just the same function, the exact same code), on the same machine, with the same compiler/tags, to get different results for the function?
Some possibilities:
1. The source code of function is identical in the two programs, but it appears in different contexts, resulting in the compiler compiling it in different ways. For example, the compiler might inline it in one place and not another, and inlining might lead to some expression reduction due to combination with other expressions at the point of the inlined call, and hence different arithmetic is performed. (To test this, move function to a separate source file, compile it separately, and link it with a linker without cross-module optimization. Also, try compiling with optimization disabled.)
2. You think there are identical inputs to function because they appear the same when printed or viewed in the debugger, but they are actually different due to small differences in the low digits that are not printed. (To test this, print the full values using the hexadecimal floating-point format. To do that, insert std::hexfloat into the output stream, followed by floating-point values. Alternately, use a C printf using the %a format. A Fortran analogue of this check is sketched after this list.)
3. Something else in the programs changes floating-point state, such as the rounding mode.
4. You think you have used an identical compiler, identical sources, identical compilation switches, and so on, but actually have not.
David Schwartz notes that floating-point values can change when they are stored, as occurs when they are simply spilled to the stack. This occurs because some processors and C++ implementations may store floating-point values with extended precision in registers but less precision in memory. Technically, this fits into either 1. (different computation nominally inside function) or 2. (different values passed to function), but it is insidious enough to warrant separate mention.
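The question above is about C++, but since the rest of this page is Fortran, here is a minimal Fortran sketch of the check from item 2 (printing the exact bit pattern of a value); the variable names are just for illustration:

program bitpattern_check
  use, intrinsic :: iso_fortran_env, only: int64, real64
  implicit none
  real(real64) :: x
  x = 0.1_real64
  ! Print the raw 64-bit pattern; two values are truly identical
  ! only if these hex strings match exactly.
  write(*,'(a,z16.16)') 'bits of x = ', transfer(x, 0_int64)
end program bitpattern_check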
Well, the answer is quite easy. If your computer behaves deterministically, it will always return the same results for the same input. That's the basic idea behind programming languages so far. (Unless we are talking about quantum computers, of course.)
So the question reduces to whether you really have the same input.
Although the above function looks strictly functional, there are often hidden inputs that are not that obvious. E.g. you might adjust the rounding mode of your FPU before calling the function, or you might set up different exception behavior. In both cases the function may behave differently for certain inputs.
So even if your computer isn't non-deterministic (i.e. buggy), the above function might return different results, although it is not that likely.
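To make the hidden rounding-mode input concrete in Fortran terms (the language of the rest of this page), here is a minimal sketch using the standard IEEE_ARITHMETIC module; the two sums are only meant to differ in their low digits, and the exact output is platform-dependent:

program rounding_mode_demo
  use, intrinsic :: ieee_arithmetic
  implicit none
  type(ieee_round_type) :: saved_mode
  real :: s
  integer :: i

  call ieee_get_rounding_mode(saved_mode)   ! remember the current mode

  call ieee_set_rounding_mode(ieee_nearest)
  s = 0.0
  do i = 1, 1000
     s = s + 0.1
  end do
  print *, 'sum with round-to-nearest:  ', s

  call ieee_set_rounding_mode(ieee_up)      ! the "hidden input" changes here
  s = 0.0
  do i = 1, 1000
     s = s + 0.1
  end do
  print *, 'sum with round-towards-+inf:', s

  call ieee_set_rounding_mode(saved_mode)   ! restore the original mode
end program rounding_mode_demo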

Error: Old-style type declaration REAL*16 not supported

I was given some legacy code to compile. Unfortunately I only have access to a f95 compiler and have 0 knowledge of Fortran. Some modules compiled but others I was getting this error:
Error: Old-style type declaration REAL*16 not supported at (1)
My plan is to at least try to fix this error and see what else happens. So here are my 2 questions.
How likely will it be that my code written for Fortran 75 is compatible in the Fortran 95 compiler? (In /usr/bin my compiler is f95 - so I assume it is Fortran 95)
How do I fix this error that I am getting? I tried googling it but cannot seem to find a clear, crisp answer.
The error you are seeing is due to an old declaration style that was frequent before Fortran 90, but never became standard. Thus, the compiler does not accept the (formally incorrect) code.
In the olden days before Fortran 90, you had only two types of real numbers: REAL and DOUBLE PRECISION. These types were platform dependent, but most compilers nowadays map them to the IEEE754 formats binary32 and binary64.
However, some machines offered different formats, often with extra precision. In order to make them accessible to Fortran code, the type REAL*n was invented, with n an integer literal from a set of compiler-dependent values. This syntax was never standard, so you cannot be sure of what it will mean to a given compiler without reading its documentation.
In the real world, most compilers that have not been asked to be strictly standards-compliant (with some option like -std=f95) will recognize at least REAL*4 and REAL*8, mapping them to the binary32/64 formats mentioned before, but everything else is completely platform dependent. Your compiler may have a REAL*10 type for the 80-bit arithmetic used by the x86 387 FPU, or a REAL*16 type for some 128-bit floating point math. However, it is important to stress that since the syntax is not standard, the meaning of that type could change from compiler to compiler.
Finally, in Fortran 90 a way to refer to different kinds of real and integer types was made standard. The new syntax is REAL(n) or REAL(kind=n) and is supported by all standard-compliant compilers. The values of n are still compiler-dependent, but the standard provides three ways to obtain specific, repeatable results:
1. The SELECTED_REAL_KIND function, which allows you to query the system for the value of n to specify if you want a real type with certain precision and range requirements. Usually, what you do is ask for it once and store the result in an INTEGER, PARAMETER variable that you use when declaring the real variables in question. For example, you would declare a type with at least 15 decimal digits of precision and an exponent range of at least 100 like this:
   INTEGER, PARAMETER :: rk = SELECTED_REAL_KIND(15, 100)
   REAL(rk) :: v
2. In Fortran 2003 and onwards, the ISO_C_BINDING module contains a series of constants that are meant to give you types guaranteed to be equivalent to the C types of the same compiler family (e.g. gcc for gfortran, icc for ifort, etc.). They are called C_FLOAT, C_DOUBLE and C_LONG_DOUBLE. Thus, you could declare a variable equivalent to a C double as REAL(C_DOUBLE) :: d.
3. In Fortran 2008 and onwards, the ISO_FORTRAN_ENV module contains a different series of constants REAL32, REAL64 and REAL128 that will give you a floating point type of the appropriate width - if some platform does not support one of those types, the constant will be a negative number. Thus, you can declare a 128-bit float as REAL(real128) :: q.
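Putting the three approaches together, a minimal sketch might look like the following; the kind numbers printed are whatever the compiler in use assigns, and nothing here is specific to one vendor:

program kind_demo
  use, intrinsic :: iso_c_binding,   only: c_double
  use, intrinsic :: iso_fortran_env, only: real32, real64, real128
  implicit none
  ! Way 1: ask for at least 15 decimal digits and a decimal exponent
  ! range of at least 100 (on most compilers this selects binary64).
  integer, parameter :: rk = selected_real_kind(15, 100)
  real(rk)       :: v
  real(c_double) :: d   ! Way 2: same representation as a C double
  real(real64)   :: w   ! Way 3: explicitly a 64-bit real

  ! real128 is a negative number if the platform has no 128-bit real.
  print *, 'kind numbers: rk =', rk, '  c_double =', c_double
  print *, 'real32/64/128 =', real32, real64, real128
  print *, 'precision(v) =', precision(v), '  range(v) =', range(v)
end program kind_demo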
Javier has given an excellent answer to your immediate problem. However I'd just briefly like to address "How likely will it be that my code written for Fortran 77 is compatible in the Fortran 95 compiler?", with the obvious typo corrected.
Fortran, if the programmer adheres to the standard, is amazingly backward compatible. With very, very few exceptions, standard-conforming Fortran 77 is Fortran 2008 conforming. The problem is that, in your case, the original programmer appears not to have adhered to the international standard: real*8 and similar are not, and have never been, part of any such standard, and the problems you are seeing are precisely why such forms should never be, and should never have been, used. That said, if the original programmer only made this one mistake, it may well be that the rest of the code will be OK; however, without the details it is impossible to tell.
TL;DR: International standards are important, stick to them!
Since we are guessing instead of requesting proper code and full details, I will venture to say that the other two answers are not correct.
The error message
Error: Old-style type declaration REAL*16 not supported at (1)
DOES NOT mean that the REAL*n syntax is not supported.
The error message is misleading. It actually means that 16-byte reals are not supported. Had the OP requested the same real by the kind notation (in any of the many ways which return gfortran's kind 16), the error message would be:
Error: Kind 16 not supported for type REAL at (1)
That can happen in some gfortran versions, especially on MS Windows.
This explanation can be found with just a very quick google search for the error message: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56850#c1
It does not complain that the old syntax is the problem; it just mentions it, because mentioning kinds might be even more confusing (especially for COMPLEX*32, which is kind 16).
The main message is: we really should be closing these kinds of questions and waiting for proper details instead of guessing and upvoting a question where the OP couldn't even copy the full message the compiler spat out.
If you are using GNU gfortran, try "real(kind=10)", which will give you 10-byte (80-bit) extended precision. I'm converting some older F77 code to run floating point math tests - it has quad precision (the real*16 you mention) defined but not used, and as the other answers correctly point out, extended precision formats tend to be machine/compiler specific. The gfortran 5.2 I am using does not support real(kind=16), surprisingly. But gfortran (available for 64-bit MacOS and 64-bit Linux machines) does have "real(kind=10)", which will give you precision beyond the typical real*8, or "double precision" as it was called in some Fortran compilers.
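As a rough sketch of how you might request that extended kind portably, assuming a gfortran target where selected_real_kind(18) resolves to the 80-bit kind 10 (the kind number itself is compiler-specific):

program extended_precision_demo
  implicit none
  ! Ask for at least 18 decimal digits; on x86 gfortran this usually
  ! selects kind 10 (the 80-bit extended type). A negative value means
  ! the compiler offers no such kind, and the declaration below would
  ! then fail to compile.
  integer, parameter :: ep = selected_real_kind(p=18)
  real(ep) :: x
  x = 1.0_ep
  print *, 'kind      =', ep
  print *, 'precision =', precision(x)
  print *, 'epsilon   =', epsilon(x)
end program extended_precision_demo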
Be careful if your old Fortran code is calling C programs and/or making assumptions about how precision is handled and how floating point numbers are represented. You may have to dig deep into exactly what is happening, and check the code carefully, to ensure things are working as expected, especially if Fortran and C routines call each other. Here is the URL for the GNU gfortran info on quad precision: https://people.sc.fsu.edu/~jburkardt/f77_src/gfortran_quadmath/gfortran_quadmath.html
Hope this helps.

Trying to use netlib code (QUADPACK). What is xerror?

I'm trying to figure out how to use quadpack.
In a single folder, I placed the contents of "qag.f plus dependencies" and the code below as qag_test.f:
(maybe this code itself is not very important. This is in fact just a snippet from the quadpack document)
      REAL A,ABSERR,B,EPSABS,EPSREL,F,RESULT,WORK
      INTEGER IER,IWORK,KEY,LAST,LENW,LIMIT,NEVAL
      DIMENSION IWORK(100),WORK(400)
      EXTERNAL F
      A = 0.0E0
      B = 1.0E0
      EPSABS = 0.0E0
      EPSREL = 1.0E-3
      KEY = 6
      LIMIT = 100
      LENW = LIMIT*4
      CALL QAG(F,A,B,EPSABS,EPSREL,KEY,RESULT,ABSERR,NEVAL,
     *         IER,LIMIT,LENW,LAST,IWORK,WORK)
C     INCLUDE WRITE STATEMENTS
      STOP
      END
C
      REAL FUNCTION F(X)
      REAL X
      F = 2.0E0/(2.0E0+SIN(31.41592653589793E0*X))
      RETURN
      END
Using gfortran *.f (installed as MinGW 64bit), I got:
C:\Users\username\AppData\Local\Temp\ccIQwFEt.o:qag.f:(.text+0x1e0): undefined reference to `xerror_'
C:\Users\username\AppData\Local\Temp\cc6XR3D0.o:qage.f:(.text+0x83): undefined reference to `r1mach_'
(and a lot more of the same r1mach_ error)
It seems r1mach is a part of BLAS (and I don't know why it's not packaged in here but obtained as "auxiliary"), but what is xerror?
How do I properly compile this snippet in my environment, Win7 64bit (hopefully without Cygwin)?
Your help is very much appreciated.
xerror is an error reporting routine. Looking at the way it is called, it appears to use Hollerith constants (the ones where "foo" is written as 3hfoo).
      if(ier.ne.0) call xerror(26habnormal return from qag ,
     *                         26,ier,lvl)
xerror in turn calls xerrwv, passing along the arguments (plus a few more).
This was definitely written before Fortran 77 became widespread.
Your best bet would be to use a compiler which still supports Hollerith constants, pull in all the dependencies (xerrwv has a few more; I don't know why you didn't get them from netlib) and run it through the compiler of your choice. Most compilers, including gfortran, support Hollerith constants; just ignore the warnings :-)
You will possibly need to modify one routine, that is xerprt. With gfortran, you could write this one as
subroutine xerprt(c, n)
  integer, intent(in) :: n
  character(len=1), dimension(n), intent(in) :: c
  write (*,'(500A)') c
end subroutine xerprt
and put this one into a separate file so that the compiler doesn't catch the rank violation (I know, I know...)

Fortran compiler 4 vs 11

I am porting an application from an older Fortran compiler version (4.0) to a new version (11.0). While porting I am facing some problems with real*4 variables:
real*4 a,b,c
a=0.9876875
b=0.6754345
c=a*b
The value of c with the old compiler is 0.667118, which is the correct value. But with the new compiler I am getting a small variation in the output (the c variable), namely 0.667120. Though it is a small variation, I am using these values in some other calculations, so the overall output has a huge difference. How do I overcome this issue?
You are, I'm assuming, going from Microsoft Fortran PowerStation 4 to Intel Visual Fortran 11.xx?
Anyway, try this:
program test32
  integer, parameter :: iwp = selected_real_kind(15,300)
  real(iwp) :: a, b, c
  a = 0.9876875
  b = 0.6754345
  c = a*b
  write(*,'(3f12.8)') a, b, c
end program test32
which gives out:
0.98768753 0.67543453 0.66711826
I will not explain selected_real_kind, not because I do not wish to, but because the documentation will probably do it much better. But do ask if something here is not clear.
p.s. The representation of real*4, or of any real type, is processor dependent and compiler dependent, and that is one of the reasons why you're getting different results.
Discussion of changes in Intel Visual Fortran compiler version 11 here:
http://software.intel.com/en-us/forums/intel-visual-fortran-compiler-for-windows/topic/68587/
The upshot is that older versions of the Fortran compiler would implicitly promote single-precision values to double-precision before performing an operation. The default behavior in version 11 is to perform the operation with single-precision. There are compiler options (/arch:ia32) to enable the old behavior in the new compiler.
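To see the two evaluation strategies side by side, here is a minimal sketch; the explicit dble() promotion only mimics what the older compiler did implicitly, and the exact printed digits will vary by compiler:

program promotion_demo
  implicit none
  real :: a, b, c_single
  double precision :: c_promoted
  a = 0.9876875
  b = 0.6754345
  c_single   = a*b              ! whole operation performed in single precision
  c_promoted = dble(a)*dble(b)  ! operands promoted to double first, roughly the old behavior
  write(*,'(2f14.10)') dble(c_single), c_promoted
end program promotion_demo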
I assume you are using Intel. Try compiling without optimization.