Fortran floating point equality

I have a Fortran program that tests equality of two floating point numbers. It can be condensed to what is shown below. When this program is run with "0.1" given as a command line argument, I expect it to print "what I expected", but instead it prints "strange". I understand that this is probably due to a floating point rounding issue, but I am hoping someone might be able to explain exactly how I should change inputvariable to make this code print "what I expected" with a command line argument of 0.1.
program equalitytest
   character(len=3) :: arg1
   real*8 :: inputvariable
   CALL GET_COMMAND_ARGUMENT(1, arg1)
   READ(arg1,*) inputvariable
   IF (inputvariable.EQ.0.1) THEN
      PRINT*, "what I expected"
   ELSE
      PRINT*, "strange"
   ENDIF
end program equalitytest
Run as follows:
./equalitytest 0.1
strange

(The immediate cause of the "strange" output is that the literal 0.1 is a default, single-precision real: inputvariable holds the double-precision approximation of 0.1, while the literal supplies the single-precision approximation converted to double, and the two are not equal. Writing the literal as 0.1d0 will normally make this particular comparison succeed, but the advice below still applies.)
As a general point, there should be very few reasons why one would need to compare real numbers for equality. If I ever find myself writing such code, I tend to pause and think about what I am trying to achieve. What real-world condition is this comparison actually a reflection of?
The exceptions to the above relate to zeros, either when writing robust code which checks for and handles possible divisions by zero, or when searching for a convergent solution to an equation - and in the latter case a delta should be used anyway.
If there really is a need for this check, why not outsource it to a common utility module within the project, e.g.
module mylib
   use iso_fortran_env
   implicit none

   private

   public :: isReal4EqualReal4
   public :: isReal4EqualReal8
   public :: isReal8EqualReal4
   public :: isReal8EqualReal8

   real(real32), parameter :: delta4 = 0.001
   real(real64), parameter :: delta8 = 0.0000000001

contains

   logical function isReal4EqualReal4(lhs, rhs) result(equal)
      real(real32), intent(in) :: lhs
      real(real32), intent(in) :: rhs
      equal = (abs(lhs - rhs) .le. delta4)
   end function isReal4EqualReal4

   logical function isReal4EqualReal8(lhs, rhs) result(equal)
      real(real32), intent(in) :: lhs
      real(real64), intent(in) :: rhs
      equal = (abs(lhs - real(rhs,4)) .le. delta4)
   end function isReal4EqualReal8

   logical function isReal8EqualReal4(lhs, rhs) result(equal)
      real(real64), intent(in) :: lhs
      real(real32), intent(in) :: rhs
      equal = isReal4EqualReal8(rhs, lhs)
   end function isReal8EqualReal4

   logical function isReal8EqualReal8(lhs, rhs) result(equal)
      real(real64), intent(in) :: lhs
      real(real64), intent(in) :: rhs
      equal = (dabs(lhs - rhs) .le. delta8)
   end function isReal8EqualReal8

end module mylib
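For illustration, here is a sketch (not part of the original answer) of how the question's program might use this module; it assumes the module above is compiled alongside it:
program equalitytest
   use mylib
   use iso_fortran_env
   implicit none
   character(len=3) :: arg1
   real(real64) :: inputvariable
   call get_command_argument(1, arg1)
   read(arg1, *) inputvariable
   ! tolerance-based comparison against a double-precision literal
   if (isReal8EqualReal8(inputvariable, 0.1_real64)) then
      print *, "what I expected"
   else
      print *, "strange"
   end if
end program equalitytest
Run with a command line argument of 0.1, this prints "what I expected", because the parsed value and the literal differ by far less than delta8.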
EDIT: Forgot to add that one of the benefits of the above is that the compiler will warn me if I'm attempting to compare two real numbers of different kinds while using the wrong interface.
EDIT: Updated to use portable real number definitions.


Should the use of the INTENT keyword speed the code up?

This question is based on an answer to the post Fortran intent(inout) versus omitting intent, namely the one by user Vladimyr.
He says that "<...> Fortran copies that data into a contiguous section of memory, and passes the new address to the routine. Upon returning, the data is copied back into its original location. By specifying INTENT, the compiler can know to skip one of the copying operations."
I did not know this at all; I thought Fortran passed by reference, exactly as C does.
The first question is: why would Fortran do this, and what is the rationale behind this choice?
As a second point, I put this behaviour to the test. If I understood correctly, use of INTENT(IN) would save the time spent in copying the data back to the original location, as the compiler is sure the data has not been changed.
I tried this little piece of code
function funco(inp) result(j)
   !! integer, dimension(:), intent (in) :: inp
   integer, dimension(:) :: inp
   integer, dimension(SIZE(inp)) :: j ! output
   j = 0.0 !! clear whole vector
   N = size(inp)
   DO i = 1, N
      j(i) = inp(i)
   END DO
end function
program main
   implicit none
   interface
      function funco(inp) result(j)
         !! integer, dimension(:), intent (in) :: inp
         integer, dimension(:) :: inp
         integer, dimension(SIZE(inp)) :: j ! output
      end function
   end interface
   integer, dimension(3000) :: inp , j
   !! integer, dimension(3000) :: funco
   integer :: cr, cm , c1, c2, m
   real :: rate, t1, t2
   ! Initialize the system_clock
   CALL system_clock(count_rate=cr)
   CALL system_clock(count_max=cm)
   CALL CPU_TIME(t1)
   rate = REAL(cr)
   WRITE(*,*) "system_clock rate ",rate
   inp = 2
   DO m = 1,1000000
      j = funco(inp) + 1
   END DO
   CALL SYSTEM_CLOCK(c2)
   CALL CPU_TIME(t2)
   WRITE(*,*) "system_clock : ",(c2 - c1)/rate
   WRITE(*,*) "cpu_time : ",(t2-t1)
end program
The function copies an array, and in the main body this is repeated many times.
According to the claim above, the time spent in copying back the array should somehow show up.
system_clock rate 1000.00000
system_clock : 2068.07910
cpu_time : 9.70935345
but the results are pretty much the same regardless of whether INTENT is used or not.
Could anybody shed some light on these two points: why does Fortran perform an additional copy (which at first seems inefficient) instead of passing by reference, and does INTENT really save the time of a copying operation?
The answer you are referring to speaks about passing certain kinds of array subsections, not whole arrays. In that case a temporary copy might be necessary, depending on the function. Your function uses an assumed-shape array, and a temporary array will not be necessary even if you try quite hard.
An example of what the author (it wasn't me) might have had in mind is
module functions
   implicit none
contains
   function fun(a, n) result(res)
      real :: res
      ! note the explicit shape !!!
      integer, intent(in) :: n
      real, intent(in) :: a(n, n)
      integer :: i, j
      do j = 1, n
         do i = 1, n
            res = res + a(i,j) *i + j
         end do
      end do
   end function
end module
program main
   use functions
   implicit none
   real, allocatable :: array(:,:)
   real :: x, t1, t2
   integer :: fulln
   fulln = 400
   allocate(array(1:fulln,1:fulln))
   call random_number(array)
   call cpu_time(t1)
   x = fun(array(::2,::2),(fulln/2))
   call cpu_time(t2)
   print *,x
   print *, t2-t1
end program
This program is somewhat faster with intent(in) than with intent(inout) in gfortran (not so much in Intel). However, it is much faster still with an assumed-shape array a(:,:), because then no copy is performed at all.
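For reference, this is a sketch (not part of the original answer) of the assumed-shape variant; here res is also zeroed explicitly before the accumulation:
function fun(a) result(res)
   real :: res
   ! assumed shape: the strided section array(::2,::2) can be passed without a temporary copy
   real, intent(in) :: a(:,:)
   integer :: i, j
   res = 0.
   do j = 1, size(a, 2)
      do i = 1, size(a, 1)
         res = res + a(i,j)*i + j
      end do
   end do
end function
The call then becomes x = fun(array(::2,::2)), with no separate size argument.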
I am also getting some strange uninitialized accesses in gfortran when running without runtime checks. I do not understand why.
Of course this is a contrived example and there are real cases in production programs where array copies are made and then intent(in) can make a difference.

Passing a function to another with unknown arguments [duplicate]

This question already has answers here:
Passing external function of multiple variables as a function of one variable in Fortran
(2 answers)
Fortran minimization of a function with additional arguments
(2 answers)
Function with more arguments and integration
(1 answer)
Passing additional arguments in Newton’s method in Fortran
(2 answers)
Closed 1 year ago.
I have a function to compute Gaussian quadrature of a function $f(x)$ over the region $x \in [a,b]$. Here, $f(x)$ takes only one argument. I would like to know what a good practice would be for using gaussquad with a function which takes more arguments, for example getlaser below.
Laser is a derived type, and calling gaussquad(mylaser%getlaser, a, b) obviously does not work.
double precision function gaussquad(f, a, b) result(I)
   implicit none
   double precision :: f
   double precision, intent(in) :: a, b
   I = 2.d0*f(b-a)
end function

double precision function getlaser(this, t)
   implicit none
   class(Laser), intent(in) :: this
   double precision, intent(in) :: t
   getlaser = dsin(this%omega*t)
end function getlaser
The getlaser procedure has a dummy argument this of a user-defined type, which makes it impossible to define a general integration module.
In the following I will explain how to implement such a general integration module assuming standard data types.
One option would be to define an optional parameter array in gaussquad which can be passed through to the procedure f.
Following is a possible implementation for the integration module
! integ.f90
module integ_m
   implicit none
   private
   public gaussquad

   abstract interface
      real function finter(x, p)
         real, intent(in) :: x
         real, optional, intent(in) :: p(:)
      end function
   end interface

contains

   function gaussquad(f, a, b, p) result(int)
      !! compute integral: int_a^b f(x; p) dx
      procedure(finter) :: f
      !! function to integrate
      real, intent(in) :: a, b
      !! integration bounds
      real, optional, intent(in) :: p(:)
      !! parameter array
      real :: int
      !! integral value
      int = (b-a)*f(0.5*(a+b), p=p)
   end function
end module
One would use it as in this program:
! main.f90
program main
   use integ_m, only: gaussquad
   implicit none

   print *, 'integrate x^2', gaussquad(parabola, 0.0, 1.0)
   print *, 'integrate laser (sin)', gaussquad(getlaser, 0.0, 1.0, [10.0])

contains

   real function parabola(x, p)
      real, intent(in) :: x
      real, optional, intent(in) :: p(:)
      if (present(p)) error stop "function doesnt use parameters"
      parabola = x*x
   end function

   real function getlaser(t, p)
      real, intent(in) :: t
      real, optional, intent(in) :: p(:)
      getlaser = sin(p(1)*t)
   end function
end program
Compilation and running yields
$ gfortran -g -Wall -fcheck=all integ.f90 main.f90 && ./a.out
integrate x^2 0.250000000
integrate laser (sin) -0.958924294

array operation in fortran

I am writing a code with a lot of 2D arrays and manipulations of them. I would like the code to be as concise as possible; for that I would like to use as many 'implicit' operations on arrays as possible, but I don't really know how to write them for 2D arrays.
For example:
DO J=1,N
   DO I=1,M
      A(I,J)=B(J)*A(I,J)
   ENDDO
ENDDO
easily becomes:
DO J=1,N
   A(:,J)=B(J)*A(:,J)
ENDDO
Is there a way to also reduce the J loop?
Thanks
For brevity and clarity, you could wrap these operations in a derived type. I wrote a minimal example which is not so concise because I need to initialise the objects, but once this initialisation is done, manipulating your arrays becomes very concise and elegant.
In arrays_module.f90 I stored a derived type arrays2d_T which holds the array coefficients plus useful information (the number of rows and columns). This type contains procedures for initialisation and for the operation you are trying to perform.
module arrays_module
   implicit none

   integer, parameter :: dp = kind(0.d0) ! double precision definition

   type :: arrays2d_T
      real(kind=dp), allocatable :: dat(:,:)
      integer :: nRow, nCol
   contains
      procedure :: kindOfMultiply => array_kindOfMuliply_vec
      procedure :: init => initialize_with_an_allocatable
   end type

contains

   subroutine initialize_with_an_allocatable(self, source_dat, nRow, nCol)
      class(arrays2d_t), intent(inOut) :: self
      real(kind=dp), allocatable, intent(in) :: source_dat(:,:)
      integer, intent(in) :: nRow, nCol
      allocate (self%dat(nRow, nCol), source=source_dat)
      self%nRow = nRow
      self%nCol = nCol
   end subroutine

   subroutine array_kindOfMuliply_vec(self, vec)
      class(arrays2d_t), intent(inOut) :: self
      real(kind=dp), allocatable, intent(in) :: vec(:)
      integer :: iRow, jCol
      do jCol = 1, self%nCol
         do iRow = 1, self%nRow
            self%dat(iRow, jCol) = vec(jCol)*self%dat(iRow, jCol)
         end do
      end do
   end subroutine
end module arrays_module
Then, in main.f90, I check the behaviour of this multiplication on a simple example:
program main
   use arrays_module
   implicit none

   type(arrays2d_T) :: A
   real(kind=dp), allocatable :: B(:)
   ! auxiliary variables that are only useful for initialization
   real(kind=dp), allocatable :: Aux_array(:,:)
   integer :: M = 3
   integer :: N = 2

   ! initialise the 2d array
   allocate(Aux_array(M,N))
   Aux_array(:,1) = [2._dp, -1.4_dp, 0.3_dp]
   Aux_array(:,2) = [4._dp, -3.4_dp, 2.3_dp]
   call A%init(aux_array, M, N)

   ! initialise vector
   allocate (B(N))
   B = [0.3_dp, -2._dp]

   ! compute the product
   call A%kindOfMultiply(B)

   print *, A%dat(:,1)
   print *, A%dat(:,2)
end program main
Compilation can be as simple as gfortran -c arrays_module.f90 && gfortran -c main.f90 && gfortran -o main.out main.o arrays_module.o
Once this object-oriented machinery exists, call A%kindOfMultiply(B) is much clearer than a FORALL approach (and much less error prone).
No one has mentioned the do concurrent construct here, which has the potential to automatically parallelize and speed up your code:
do concurrent(j=1:n); A(:,j)=B(j)*A(:,j); end do
A one-line solution can be achieved by using FORALL:
FORALL(J=1:N) A(:,J) = B(J)*A(:,J)
Note that FORALL is marked as obsolescent in the most recent versions of the standard, but as far as I know, it is one of the few ways you can perform that operation as a single line of code.
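As a side note (this is not from either answer above), the spread intrinsic offers yet another single-line form, at the cost of building a temporary M-by-N array:
A = A * spread(B, dim=1, ncopies=size(A, 1))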

matmul with non-conforming matrices

I expected the intrinsic matmul to fail when multiplying non-conforming matrices. In the simple code below, I am multiplying a 4x3 matrix by a 4x4 matrix using matmul. Interestingly, the Intel compiler does not issue any warning or fatal error message at either run time or compile time. I tried the '-check all' flag and it did not catch this error either. Does anyone have any thoughts on this?
P.S. gfortran does complain about this operation
program main
   implicit none
   interface
      subroutine shouldFail(arrayInput, scalarOutput)
         implicit none
         real (8), intent(in) :: arrayInput(:,:)
         real (8), intent(out) :: scalarOutput
      end
   end interface
   real (8) :: scalarOutput, arrayInput(4, 3)
   arrayInput(:,:) = 1.0
   call shouldFail(arrayInput, scalarOutput)
   write(*,*) scalarOutput
end program main
!#############################################
subroutine shouldFail(arrayInput, scalarOutput)
   implicit none
   real (8), intent(in) :: arrayInput(:,:)
   real (8), intent(out) :: scalarOutput
   real (8) :: jacobian(3, 4), derivative(4, 4)
   derivative(:,:) = 1.0
   jacobian = matmul(arrayInput, derivative)
   scalarOutput = jacobian(1, 2)
end subroutine shouldFail
On Intel, REAL(8) is consistent, so moving on to the question...
The answer is here: software.intel.com/en-us/node/693211
If arrayInput has shape (n, m) and derivative has shape (m, k), the result is a rank-two array (jacobian) with shape (n, k)...
Adding USE ISO_C_BINDING and then using REAL(KIND=C_DOUBLE) can be worthwhile for some conceivable future where the code needs to be portable... (usually after MATMUL has convinced one that it works).
It could look like this:
subroutine shouldFail(arrayInput, scalarOutput)
   USE ISO_C_BINDING
   implicit none
   real(KIND=C_DOUBLE), DIMENSION(:,:), intent(in)  :: arrayInput
   real(KIND=C_DOUBLE),                 intent(out) :: scalarOutput
   real(KIND=C_DOUBLE), DIMENSION(3,4) :: Jacobian
   real(KIND=C_DOUBLE), DIMENSION(4,4) :: derivative
(Here: You may want to consider checking the rank/shape before the MATMUL call if you are concerned.)
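If such a pre-check is wanted, a minimal sketch (reusing the question's names; the message text is only illustrative) could be:
if (size(arrayInput, 2) /= size(derivative, 1)) then
   error stop "non-conforming shapes passed to matmul"
end if
jacobian = matmul(arrayInput, derivative)
For matmul(A, B) the inner dimensions must agree, i.e. size(A, 2) must equal size(B, 1), which is exactly what the guard tests.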

Best way to implement bitwise operations for n-byte integers

I want to implement an efficient library for bitwise operations on big integers. I've written the following function that overrides BTEST:
FUNCTION testb_i2b(n,i)
   INTEGER(I8B), DIMENSION(0:), INTENT(IN) :: n
   INTEGER(I2B), INTENT(IN) :: i
   INTEGER(I2B) :: j
   LOGICAL :: testb_i2b
   j = ISHFT(i,-6)
   IF ( j .LE. UBOUND(n,1) ) THEN
      testb_i2b = BTEST(n(j),i-ISHFT(j,6))
   ELSE
      testb_i2b = .FALSE.
   END IF
END FUNCTION testb_i2b
The array n contains the 64*(SIZE(n)-1) bits of my big integer. Is there a more efficient way to obtain the same functionality?
I don't know whether this is faster than your version (I'll leave you to test that), but it involves fewer operations and no explicit if statement, so it might be. It gives the same results as your code for the few tests I've run. I've hard-wired the size of the integers in the bignum at 64 bits; you could make that a parameter if you wanted to.
LOGICAL FUNCTION btest_bignum(bn,ix)
   USE iso_fortran_env, ONLY: int64, int16   ! provides the int64 and int16 kind constants
   IMPLICIT NONE
   INTEGER(int64), DIMENSION(0:), INTENT(in) :: bn
   INTEGER(int16), INTENT(in) :: ix
   INTEGER :: array_ix
   array_ix = ix/64
   btest_bignum = BTEST(bn(array_ix), ix-(array_ix*64))
END FUNCTION btest_bignum
Note that I've used the now-standard kind constants int64 and int16 from the iso_fortran_env intrinsic module.
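For what it's worth, here is a small self-contained driver (my own sketch, with the function above moved into a contains section) showing the expected behaviour:
program demo
   use iso_fortran_env, only: int64, int16
   implicit none
   integer(int64) :: bn(0:1)
   bn = 0_int64
   bn(1) = ibset(bn(1), 5)               ! sets global bit 69 (= 64 + 5)
   print *, btest_bignum(bn, 69_int16)   ! T
   print *, btest_bignum(bn, 3_int16)    ! F
contains
   logical function btest_bignum(bn, ix)
      integer(int64), dimension(0:), intent(in) :: bn
      integer(int16), intent(in) :: ix
      integer :: array_ix
      array_ix = ix/64
      btest_bignum = btest(bn(array_ix), ix - (array_ix*64))
   end function btest_bignum
end program demo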