open(10,file='datad.dat',status='old')
do i=1,1484
read(10,99)d(i,:)
99 format(10(e16.12))
end do
This is my code to read datad.dat. The file is huge, so I will post just the first row:
2.1762368e+13 0.0 0.0 0.0 1.0123726e-01 1.7723948e+149 1.0671934e+06 1.5929603e+104 4.3220965e+48 7.2446595e+16
But when I execute the code I get:
2.17623686E+13 0.00000000 0.00000000 0.00000000 0.101237260 Infinity 1067193.38 Infinity Infinity 7.24465978E+16
I have compiled the Fortran code with gfortran. Why do I get Infinity? Is there a limitation regarding the exponent? How can I check this?
It really depends on how you've declared d and which compiler you're using.
On GFortran, the limit is HUGE(0.0E0) for reals and HUGE(0D0) for double precision. This roughly comes up to 1E38 for real and 1D308 for DP.
At a guess, you've declared d as real, so anything over about 1E38 overflows to Infinity.
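To answer the "How can I check this?" part: a minimal sketch that prints the limits using the standard HUGE intrinsic (the kind names come from the intrinsic iso_fortran_env module; since the declaration of d was not posted, the double precision fix below is an assumption):

program check_limits
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  ! HUGE returns the largest finite value of its argument's kind;
  ! anything bigger overflows to Infinity on input or in arithmetic.
  print *, 'largest single precision value:', huge(1.0_real32)   ! about 3.40E+38
  print *, 'largest double precision value:', huge(1.0_real64)   ! about 1.80D+308
end program check_limits

Your row contains values such as 1.7723948e+149, far beyond single precision but well within double precision, so declaring d as real(real64) (or double precision) should remove the Infinities.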
Is there a way to combine NaN and ordinary numbers differently from what is usually done in Fortran?
I have several summations which contain 'safe' terms, which cannot be NaN, and some other terms which can be NaN.
I would like the evaluation of the expression to neglect the addends when they are NaN.
I cannot just get rid of them by multiplying them by a zero factor when they are NaN, as NaN x 0 gives NaN anyway.
Ideas?
Thanks
There is no arithmetic operation that does not propagate NaN. So ideas like multiplying by 0 will not work.
Your only solution is to skip the NaN terms in the sum. Do that with something based on
IF (IEEE_IS_NAN(x))
If you are not using IEEE 754, or are using an older Fortran standard, then you can use
IF(x .NE. x)
which will be TRUE if and only if x is NaN.
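For example, a minimal sketch of a NaN-skipping sum (the array name terms and its contents are illustrative, not from the question):

program nan_safe_sum
  use, intrinsic :: ieee_arithmetic, only: ieee_is_nan, ieee_value, ieee_quiet_nan
  implicit none
  real :: terms(4), total
  integer :: i
  terms = [1.0, 2.0, 3.0, 4.0]
  terms(3) = ieee_value(terms(3), ieee_quiet_nan)   ! inject a NaN term
  total = 0.0
  do i = 1, size(terms)
     if (.not. ieee_is_nan(terms(i))) total = total + terms(i)   ! skip NaN addends
  end do
  print *, 'sum ignoring NaN terms:', total   ! prints 7.0
end program nan_safe_sum

With the pre-IEEE fallback, the condition becomes if (terms(i) == terms(i)), which is true exactly when terms(i) is not NaN.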
In the code below I am adding together 865398.78 and -865398.78. I expect to get 0, but instead I get -0.03.
Source Code:
program main
real(8) :: x
open(10,file="test.txt")
read(10,*)x
print *,"x=",x
x=x+865398.78
print *,"x+865398.78=",x
end program
Result:
x= -865398.780000000
x+865398.78= -3.000000002793968E-002
Am I misusing "read", or is something else going on?
The number 865398.78 is represented in single precision in your code. Single precision can handle about 7 significant digits, while your number has 8. You can make it double precision by writing
x=x+865398.78_8
I will make one big assumption in this answer: that real(8) corresponds to double precision.
You are probably assuming that your 865398.78 means the same thing wherever it occurs. In source code that is true: it is a default real literal constant which approximates 865398.78.
When you have
x=x+865398.78
for x double precision, then the default real constant is converted to a double precision value.
However, in the read statement
read(10,*)x
given input "-865398.78" then x takes a double precision approximation to that value.
Your non-zero answer comes from the fact that a default real (single precision) approximation converted to a double precision value is not, in general, and is not in this case, the same as a direct double precision approximation.
This last fact is explained in more detail in other questions. As is the solution to use x=x+865398.78_8 (or better, don't use 8 as the kind value).
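A minimal sketch contrasting the two constants (it assumes, as above, that real(8) means double precision; real64 from iso_fortran_env is the portable alternative to the literal kind 8):

program literal_kinds
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  real(real64) :: x
  x = -865398.78_real64            ! what the read statement effectively stores
  print *, x + 865398.78           ! default real constant: about -3.0E-002
  print *, x + 865398.78_real64    ! double precision constant: 0.0
end program literal_kinds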
I wrote a Fortran code to solve PDEs (like the continuity equation), but the initial values of the unknowns are of order 1.0e20. This makes my code produce NaN (not a number) or Infinity, because it multiplies and divides big numbers.
What can I do to run a simulation with such big numbers?
The equations are: the Poisson equation and continuity-like equations.
You can use real*8 or double precision (both 64-bit floating-point representations) as the type instead of real (which is 32 bits). That will give a decimal exponent range of at least 308 instead of the smaller range of about 38.
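For instance, a minimal sketch (variable names are illustrative) showing where single precision overflows while double precision does not:

program precision_range
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  real(real32) :: a32
  real(real64) :: a64
  a32 = 1.0e20_real32
  a64 = 1.0e20_real64
  print *, a32 * a32   ! 1e40 exceeds huge(a32) ~ 3.4e38: prints Infinity
  print *, a64 * a64   ! 1e40 is comfortably below huge(a64) ~ 1.8e308
end program precision_range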
I came across something rather interesting while playing around with the math module for trigonometric calculations using tan, sin, and cos.
As stated in all math textbooks, online sources, and courses, the following is true:
tan(x) = sin(x) / cos(x)
However, I came across some precision errors while using the three trig functions, as in the following:
from math import tan, sin, cos
theta = -30
alpha = tan(theta)
omega = sin(theta) / cos(theta)
print(alpha, omega)
print(alpha == omega)
>>> (6.405331196646276, 6.4053311966462765)
>>> (False)
I have tried a couple of different values for theta and the last digit of the results has been off by a tiny bit.
Is there something that I am missing?
This issue is because of finite floating-point precision (not all real numbers can be represented exactly, and not all calculations with them are precise). An accessible guide is in the Python docs.
Using the default "double precision" floating-point representation, you can never hope for better than about 15 decimal places of precision, and calculations involving such numbers tend to degrade that precision (rounding error). In the same way, you get False from the following:
In [1]: 0.01 == (0.1)**2
Out[1]: False
because Python isn't squaring 0.1 but the "nearest representable number" to 0.1, which is neither 0.01 nor the nearest representable number to 0.01.
D Stanley has given the correct way to test for "equality" within some absolute tolerance: (abs(a-b) < tol) where tol is some small number you choose to fit your expected precision.
As you have discovered, there is a level of imprecision when comparing floating-point numbers. A common way to test for "equality" is to determine a reasonable amount of difference you are willing to accept (commonly called "epsilon") and compare the difference between the two numbers against that maximum error:
epsilon = 1E-14
print(alpha, omega)
print(alpha == omega)
print(abs(alpha - omega) < epsilon)
First, you should notice that the arguments of trigonometric functions are given in radians, not in degrees. Thus theta = -30 refers to an angle of -30*180/pi (about -1718.9) degrees.
Second, the processor, and thus the math library it calls, has separate internal procedures for computing tan and (sin, cos). The extra division operation loses 1/2 to 1 bit of precision, which explains the difference in the results.
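The effect is not specific to Python. A minimal Fortran sketch of the same comparison (whether the two results differ in the last bit depends on the math library, so the outcome is illustrative rather than guaranteed):

program tan_vs_ratio
  use, intrinsic :: iso_fortran_env, only: real64
  implicit none
  real(real64) :: theta, alpha, omega
  theta = -30.0_real64            ! radians, as in the question
  alpha = tan(theta)
  omega = sin(theta) / cos(theta)
  print *, alpha, omega           ! typically agree to ~15 digits
  print *, alpha == omega         ! may well be .false.
end program tan_vs_ratio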
I am trying to compute the division 1/16777216, which is equal to 5.96046448e-8,
but this:
printf("number: %f \n", 1.0f / 16777216.0f);
always gives me 0.000000 instead of the answer I would expect.
I looked up the ranges, because I thought the problem might be that float is simply too small to handle such a number, but IEEE 754 states the smallest positive normalized single-precision value to be about 1.18x10^-38.
Am I missing something, and is that why the result is not the expected one?
When using fixed formatting (%f) you get a format with a decimal point and six digits after it. Since the value you used rounds to something smaller than 0.000001, it is reasonable for 0.000000 to be printed. You can either ask for more digits (e.g. %.10f), change the format to scientific notation (%e), or use %g, which chooses between the two styles based on the value's magnitude.