Incorrect eigenvalues with cgeev - fortran

I have written the following code to get the eigenvalues and determinant of a matrix in Fortran using cgeev:
      SUBROUTINE CDETS(CS,CW,CDET,N)
      IMPLICIT REAL*8 (A,B,D-H,O-Z)
      IMPLICIT COMPLEX*16 (C)
      DIMENSION CW(*),CS(N,*)
      ALLOCATABLE :: CWK(:), WK(:), CWL(:,:), CWR(:,:)
      ALLOCATE (WK(2*N),CWK(10*N),CWL(N,N),CWR(N,N))
      CALL CGEEV('N','N',N,CS,N,CW,CWL,N,CWR,N,CWK,10*N,
     &           WK,INFO)
      DEALLOCATE (WK,CWK,CWL,CWR)
      CDET = 1.D0
      DO I = 1, N
         CDET = CDET*CW(I)
      ENDDO
      END SUBROUTINE
And this simple program to check it:
      PROGRAM TESTDET
      IMPLICIT REAL*8 (A,B,D-H,O-Z)
      IMPLICIT COMPLEX*16 (C)
      DIMENSION :: CS(2,2), CW(2)
      CS(1,1) = (1.D0,1.D0)
      CS(1,2) = 1.D0
      CS(2,1) = 0.D0
      CS(2,2) = 1.D0
      CALL CDETS(CS,CW,CDET,2)
      PRINT *, CW(1)
      PRINT *, CW(2)
      END
And I get the following quite confusing results:
( 0.0000000000000000 , 1.0000000000000000 )
( 1.2828908559913808E-319, 7.6130689002776223E-317)
What is the matter here?

You are using the wrong procedure. cgeev and zgeev take the same argument list, but the argument kinds differ: for zgeev, the array RWORK is double precision and the arrays A, VL, VR, W, and WORK are complex*16; for cgeev they are default real and default complex, respectively.
You are implicitly typing your reals as real*8 (double precision, in a non-portable fashion) and your complex variables as complex*16. Your variables therefore match the argument list of zgeev, which is why zgeev works for you and cgeev doesn't. To use cgeev, change your implicit types to real and complex, and you'll find that cgeev now works and zgeev does not.
From the BLAS naming conventions:
Every routine in the BLAS library comes in four flavors, prefixed by the letters S, D, C, and Z. Each letter indicates the format of the input data:
S stands for single-precision (32-bit IEEE floating point numbers),
D stands for double-precision (64-bit IEEE floating point numbers),
C stands for complex numbers (represented by pairs of 32-bit IEEE floating point numbers),
Z stands for double complex numbers (represented by pairs of 64-bit IEEE floating point numbers)
For your double precision complex variables, you'll need to use the Z variants of functions rather than C.
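As a sketch of the fix, the subroutine from the question needs only the routine name changed, since its implicitly typed arguments already match zgeev's expectations (this is the question's own code with one letter altered, not a new implementation):

```fortran
      SUBROUTINE CDETS(CS,CW,CDET,N)
      IMPLICIT REAL*8 (A,B,D-H,O-Z)
      IMPLICIT COMPLEX*16 (C)
      DIMENSION CW(*),CS(N,*)
      ALLOCATABLE :: CWK(:), WK(:), CWL(:,:), CWR(:,:)
      ALLOCATE (WK(2*N),CWK(10*N),CWL(N,N),CWR(N,N))
C     ZGEEV expects COMPLEX*16 arrays and DOUBLE PRECISION RWORK,
C     which matches the implicit typing used here
      CALL ZGEEV('N','N',N,CS,N,CW,CWL,N,CWR,N,CWK,10*N,
     &           WK,INFO)
      DEALLOCATE (WK,CWK,CWL,CWR)
      CDET = 1.D0
      DO I = 1, N
         CDET = CDET*CW(I)
      ENDDO
      END SUBROUTINE
```

With this one-letter change, the test program from the question prints the expected eigenvalues (1,1) and (1,0) instead of garbage.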

I found a way to fix this, although it made cgeev seem broken: if you replace cgeev with another LAPACK routine, zgeev, it works perfectly. The most pleasant part is that I had to change only one letter.


Why do I keep getting 0 as output? [duplicate]

Why does this Fortran program produce only zeros? When I print it out I get -0.00000 everywhere! What have I done wrong? In Matlab it runs perfectly; I don't see any reason why it's not working, to be honest!
It seems like it's the fraction that messes it up: if I set x equal to some decimal number, it works.
program main
  implicit none
  integer iMax, jMax
  double precision, dimension(:,:), allocatable :: T
  double precision x, dx, f, L2old, L2norm, y
  integer i, j, n, bc
  n = 10
  allocate(T(1:n+2, 1:n+2))
  T = 0.0d0
  do i = 2, n+1
     do j = 2, n+1
        x = (j+1)*1/24
        y = (i+1)*1/24
        T(i,j) = -18*(x**2 + y**2)**2
        Write(*,*) 'T(', i, ',', j, ') = ', T(i,j)
     end do
  end do
  Write(*,*) 'T(1,1)', T(1,1)
end program main
x=(j+1)*1/24
1/24 is an integer division that rounds down to 0. You should be able to force floating point division by making at least one of the operands floating point,
e.g.
x=(j+1)*1.0/24.0
As was indicated by Jim Lewis, the answer to the OP's question is indeed the integer division.
Nonetheless, I think it is important to point out that one should take care of how the floating point fraction is written down. As the OP's program shows, x is of type DOUBLE PRECISION. Then the correct expression should be
x=(j+1)*1.0D0/24.0D0
The difference here is that now you ensure that the division happens with the same precision as x was declared.
The following program demonstrates the problem:
program test
  WRITE(*,'(A43)') "0.0416666666666666666666666666666666..."
  WRITE(*,'(F40.34)') 1/24
  WRITE(*,'(F40.34)') 1.0/24.0
  WRITE(*,'(F40.34)') 1.0D0/24.0
  WRITE(*,'(F40.34)') 1.0D0/24.0D0
end program test
which gives the output
0.0416666666666666666666666666666666...
0.0000000000000000000000000000000000
0.0416666679084300994873046875000000
0.0416666666666666643537020320309239
0.0416666666666666643537020320309239
You clearly see the differences. The first line is the mathematically correct result. The second line is the integer division, leading to zero. The third line shows the output in case the division is computed as REAL, while the fourth and fifth lines are in DOUBLE PRECISION. Please take into account that in my case REAL implies a 32-bit floating point number and DOUBLE PRECISION a 64-bit version. The precision and representation of both REAL and DOUBLE PRECISION are compiler dependent and not defined in the standard, which only requires that DOUBLE PRECISION has a higher precision than REAL.
4.4.2.3 Real type
1 The real type has values that approximate the mathematical real numbers. The processor shall provide two or more approximation methods that define sets of values for data of type real. Each such method has a representation method and is characterized by a value for the kind type parameter KIND. The kind type parameter of an approximation method is returned by the intrinsic function KIND (13.7.89).
5 If the type keyword REAL is used without a kind type parameter, the
real type with default real kind is specified and the kind value is
KIND (0.0). The type specifier DOUBLE PRECISION specifies type real
with double precision kind; the kind value is KIND (0.0D0). The
decimal precision of the double precision real approximation method
shall be greater than that of the default real method.
This actually implies that, if you want to ensure that your computations are done using 32bit, 64bit or 128bit floating point representations, you are advised to use the correct KIND values as defined in the intrinsic module ISO_FORTRAN_ENV.
13.8.2.21 REAL32, REAL64, and REAL128
1 The values of these default integer scalar named constants shall be
those of the kind type parameters that specify a REAL type whose
storage size expressed in bits is 32, 64, and 128 respectively. If,
for any of these constants, the processor supports more than one kind
of that size, it is processor dependent which kind value is provided.
If the processor supports no kind of a particular size, that constant
shall be equal to −2 if the processor supports kinds of a larger size
and −1 otherwise.
So this would lead to the following code
PROGRAM main
  USE iso_fortran_env, ONLY : DP => REAL64
  IMPLICIT NONE
  ...
  REAL(DP) :: x
  ...
  x = (j+1)*1.0_DP/24.0_DP
  ...
END PROGRAM main

Very small numbers in Fortran

Is it possible to work with very small numbers, like 1E-1200, in Fortran? I know how to do it in Python, but the code for my master's thesis runs too slowly. My supervisor recommends that I use Fortran, but I am not sure whether this would be a problem.
The previous answers suggesting use of REAL128 from ISO_FORTRAN_ENV describe a non-portable solution. What you would get here is a real type whose representation is 128 bits, but that says nothing about the range or precision of the type! For example, some IBM systems have a 128-bit real kind that is actually two doubles with an offset exponent. This gets you more precision but not significantly more range.
The proper way to do this is to use the SELECTED_REAL_KIND intrinsic function to determine the implementation's kind that supports the desired range. For example:
integer, parameter :: bigreal = SELECTED_REAL_KIND(R=1200)
real(KIND=bigreal) :: x
If the implementation doesn't have a real kind that can represent a value with a decimal exponent of plus or minus 1200, the kind value returned will be negative and the declaration will fail to compile; otherwise, you'll get the smallest appropriate kind.
You could also specify a P= value in the call to indicate what minimum precision (in decimal digits) you need.
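A minimal sketch combining both the R= and P= requests (the constant name bigreal follows the answer; the program body and the requested 30 digits of precision are illustrative assumptions):

```fortran
program kind_demo
  implicit none
  ! request at least 30 decimal digits and a decimal exponent range of 1200
  integer, parameter :: bigreal = selected_real_kind(p=30, r=1200)
  real(kind=bigreal) :: x
  x = 1.0e-1200_bigreal   ! representable because the kind guarantees the range
  write(*,*) 'kind =', bigreal, '  x =', x
end program kind_demo
```

On compilers with IEEE binary128 support (e.g. gfortran), bigreal resolves to the quad-precision kind; where no such kind exists, the negative kind value makes the real(kind=bigreal) declaration a compile-time error.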
The short answer is yes.
Modern compilers typically support so-called quad precision, a 128-bit real. A portable way to access this type is using the ISO_FORTRAN_ENV. Here is a sample program to show how big and small these numbers get:
program main
  use ISO_FORTRAN_ENV, only : REAL32, REAL64, REAL128
  implicit none
  ! -- tiny and huge grab the smallest and largest
  ! -- representable number of each type
  write(*,*) 'Range for REAL32: ', tiny(1._REAL32), huge(1._REAL32)
  write(*,*) 'Range for REAL64: ', tiny(1._REAL64), huge(1._REAL64)
  write(*,*) 'Range for REAL128: ', tiny(1._REAL128), huge(1._REAL128)
end program main
The types REAL32, REAL64, and REAL128 are typically known as single, double, and quad precision. Longer types have a larger range of representable numbers and greater precision.
On my machine with gfortran 4.8, I get:
mach5% gfortran types.f90 && ./a.out
Range for REAL32: 1.17549435E-38 3.40282347E+38
Range for REAL64: 2.2250738585072014E-308 1.7976931348623157E+308
Range for REAL128: 3.36210314311209350626E-4932 1.18973149535723176502E+4932
As you can see, quad precision can represent numbers as small as 3.4E-4932.
Most Fortran compilers support the REAL128 data format, which matches IEEE binary128. Some also support an 80-bit extended-precision real with a similar exponent range, matching C's long double. These don't have the performance of REAL64, but they should still be much faster than Python.

Defining constants of sufficient precision in Fortran [duplicate]

I have the following Fortran code:
Program Strange
Real(Kind=8)::Pi1=3.1415926535897932384626433832795028841971693993751058209;
Real(Kind=8)::Pi2=3.1415926535897932384626433832795028841971693993751058209_8;
Print*, "Pi1=", Pi1;
Print*, "Pi2=", Pi2;
End Program Strange
I compile with gfortran, and the output is:
Pi1= 3.1415927410125732
Pi2= 3.1415926535897931
Of course the second is correct, but should this be the case? It seems like Pi1 is being input to memory as a single precision number, and then put into a double precision memory slot. But this seems like an error to me. Am I correct?
I do know a bit of Fortran! @Dougal's answer is correct, though the snippet he quotes is not: embedding the letter d in a real literal constant has not been required since Fortran 90, and indeed many Fortran programmers now regard that approach as archaic. The snippet is also misleading in advising the use of 3.1415926535d+0 to initialise a 64-bit floating-point value for pi, as it doesn't set enough of the digits to their correct values.
The statement:
Real(Kind=8)::Pi1=3.1415926535897932384626433832795028841971693993751058209
defines Pi1 to be a real variable of kind 8. The literal real value 3.1415926535897932384626433832795028841971693993751058209 is, however, a real value of default kind, most likely to be a 4-byte real on most current compilers. That seems to explain your output but do check your documentation.
On the other hand, the literal real value 3.1415926535897932384626433832795028841971693993751058209_8 is, by the suffixed kind specification, of kind=8, which is the same as the kind of the variable it is assigned to.
Three more points:
1) Don't fall into the trap of thinking that kind=8 means the same thing as 64-bit floating-point number or double. For many compilers it does, for some it doesn't. Kind numbers are not portable between Fortran implementations. They are, according to the standard, arbitrary positive integers. Better, with a modern compiler, would be to use the predefined constants from the intrinsic module iso_fortran_env, e.g.
use, intrinsic :: iso_fortran_env
...
real(real64) :: pi = 3.14159265358979323846264338_real64
There are other portable approaches to setting variable kinds using functions such as selected_real_kind.
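For instance, a sketch of the selected_real_kind route (the constant name dp and the request for 15 decimal digits are illustrative assumptions, not part of the original answer):

```fortran
program pi_demo
  implicit none
  ! request a kind with at least 15 decimal digits of precision;
  ! on common compilers this resolves to the 64-bit IEEE kind
  integer, parameter :: dp = selected_real_kind(15)
  real(dp), parameter :: pi = 3.14159265358979323846264338_dp
  write(*,*) pi
end program pi_demo
```

Because the kind suffix _dp is attached to the literal itself, all of the digits are kept, avoiding the single-precision truncation seen with Pi1 in the question.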
2) Since the value of pi is unlikely to change during the execution of your program you might care to make it a parameter thus:
real(real64), parameter :: pi = 3.14159265358979323846264338_real64
3) It isn't necessary (or usual) to end Fortran statements with a ';' unless you want to have more than one statement on the same line in the source file.
I don't really know Fortran, but this page says:
The letter "d" must be embedded in the literal, otherwise, the compiler's pre-processor would round it off to be a Single Precision literal. For example, 3.1415926535 would be read as 3.141593 while 3.1415926535d+0 would be stored with all the digits intact. The letter "d" for double precision numbers has the same meaning as "e" for single precision numbers.
So it seems like your guess is correct.

real(8) resolution, data overflow

I have the following code which does not compile using gfortran:
program test_overflow
real(8) a,b
b=0.d0
a=1e39
write(*,*) a*b
end program
The error output from gfortran is
test.f90:4.14:
a=1e39
1
Error: Real constant overflows its kind at (1)
I wonder what is the issue here. As far as I remember, real(8) should give a double precision range of 10 to the power of -100 to +100 (approximately), am I wrong about this?
Use a=1d39 instead of a=1e39.
e denotes single precision.
d denotes double precision.
You may want to refer to the documentation for Double Precision Constants.
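A minimal corrected version of the question's program, assuming only the exponent letter of the literal needs to change:

```fortran
program test_overflow
  real(8) a, b
  b = 0.d0
  a = 1d39          ! the d suffix makes the literal double precision,
                    ! so 10**39 fits comfortably in its range
  write(*,*) a*b
end program
```

This compiles cleanly with gfortran and prints zero, since a*b multiplies by b = 0.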
The literal
1e39
is a default real. It doesn't matter what is on the left-hand side of the assignment; Fortran always evaluates expressions without the surrounding context. The value 10^39 is too large for the default real on your processor.
real(8) may or may not give you double precision. Often it does, but it is compiler specific. If you insist on the kind 8, although I won't recommend it, you get floating point literals of the same kind by using the suffix
1e39_8
It is better to place the value 8 in a named constant
integer, parameter :: wp = 8
(though it is better to use other methods than the magic number 8)
and then you can write
real(wp) :: x
and
1e39_wp
There is an old method which distinguishes just default (sometimes called single precision) and double precision reals. In that case you declare your variables as
double precision :: x
and literals as
1d39
Mixing the two methods (real(8) and 1d39) is strange.

How to calculate numbers to arbitrarily high precision?

I wrote a simple fortran program to compute Gauss's constant :
program main
  implicit none
  integer :: i, nit
  double precision :: u0, v0, ut, vt
  nit = 60
  u0 = 1.d0
  v0 = sqrt(2.d0)
  print *, 1.d0/u0, 1.d0/v0
  do i = 1, nit
     ut = sqrt(u0*v0)
     vt = (u0+v0)/2.d0
     u0 = ut
     v0 = vt
     print *, 1.d0/u0, 1.d0/v0
  enddo
end program main
The result is 0.83462684167407308 after 4 iterations. Is there any way to get more digits using the arithmetic-geometric mean method? How do people compute many digits for numbers such as pi, Euler's constant, and so on? Does each irrational number have a specific algorithm?
If your goal is to insert a constant value into your program, the easiest solution is to look up the value on the web or in a book. Be sure to add a type specification to the numeric value, otherwise Fortran will treat it as the default single precision. One could write pi as pi_quad = 3.14159265358979323846264338327950288_real128, showing the use of a type specifier on a constant.
If you want to do high precision calculations, you could use some high-precision type available in your compiler. Many compilers now have quadruple precision. If they have the Fortran 2008 version of the ISO_FORTRAN_ENV module, you can request it via the type real128.
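As a sketch, the arithmetic-geometric mean loop from the question can be rerun in quadruple precision, assuming the compiler provides REAL128 (the program name and iteration count are illustrative):

```fortran
program agm_quad
  use iso_fortran_env, only : real128
  implicit none
  integer :: i
  real(real128) :: u, v, ut
  u = 1.0_real128
  v = sqrt(2.0_real128)
  do i = 1, 10              ! the AGM converges quadratically,
     ut = sqrt(u*v)         ! so a handful of steps suffices
     v  = (u + v)/2.0_real128
     u  = ut
  end do
  write(*,*) 1.0_real128/u  ! Gauss's constant, now at quad precision
end program agm_quad
```

Note that the quad-precision kind suffix must appear on every literal (1.0_real128, 2.0_real128); otherwise the intermediate arithmetic would silently drop back to default precision.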
Arbitrary precision (user specified number of digits, to very high number of digits) is outside the language and is available in libraries, e.g., MPFUN90, http://crd-legacy.lbl.gov/~dhbailey/mpdist/
Yes, different constants have various algorithms. This is a very large topic.
Solution for pi:
pi = 4.0d0 * datan(1.0d0)
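The same identity extends to any kind via the generic atan intrinsic; for example, assuming the compiler provides REAL128, a quad-precision pi can be computed as (a sketch, not from the original answer):

```fortran
program pi_quad_demo
  use iso_fortran_env, only : real128
  implicit none
  real(real128) :: pi
  ! the generic atan resolves to the quad-precision version here,
  ! unlike the specific name datan, which is double precision only
  pi = 4.0_real128 * atan(1.0_real128)
  write(*,*) pi
end program pi_quad_demo
```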