gfortran REAL not accurate to 8 decimal places [duplicate] - fortran

This question already exists:
gfortran represents REAL incorrectly [duplicate]
Closed 8 years ago.
This question has not been previously answered. I am trying to represent a real (or any number, for that matter) in Fortran correctly. What gfortran is doing for me is way off. For example, when I declare the variable REAL :: pi = 3.14159, gfortran prints pi = 3.14159012 rather than, say, 3.14159000. See below:
PROGRAM Test
IMPLICIT NONE
REAL:: pi = 3.14159
PRINT *, "PI = ",pi
END PROGRAM Test
This prints:
PI = 3.14159012
I might have expected something like PI = 3.14159000 as a REAL is supposed to be accurate to at least 8 decimal places.

I'm in a good mood, so I'll try to answer this question, which is basic knowledge that can easily be googled (as already pointed out in the comments to this and your former question).
Luckily, Fortran provides some really interesting intrinsics to get some understanding of floating point numbers.
The 8 digits you are talking about are a rule of thumb and can be related to the intrinsic EPSILON(x), which returns the smallest deviation from 1 that can be represented within the chosen model (e.g. the default REAL(4)). This value is about 1.19e-7, which means that your 8th digit is most likely wrong. I write most likely because some numbers can be represented exactly.
In the case of PI, the smallest representable deviation can be printed using the intrinsic SPACING(PI). This shows a value of 2.38e-7, which is slightly larger than the epsilon and still allows for 7 correct digits.
Now, why does your value of PI get stored as 3.14159012? When you store a floating point number, you always store the nearest representable number.
Using the value of spacing, we can get the possible values for your pi. Possible numbers and their differences to your value of 3.14159 are:
3.14158988 1.20E-007
3.14159012 -1.18E-007
3.14159036 -3.56E-007
As you can see, 3.14159012 is the nearest possible value to 3.14159 and is thus stored and printed.
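A short program (assuming gfortran with the default single-precision REAL) shows these intrinsics and the neighbouring representable values directly:
PROGRAM SpacingDemo
IMPLICIT NONE
REAL :: pi = 3.14159
PRINT *, "EPSILON(1.0) = ", EPSILON(1.0)      ! ~1.19e-7, smallest representable deviation from 1
PRINT *, "SPACING(pi)  = ", SPACING(pi)       ! ~2.38e-7, gap between pi and the next REAL
PRINT *, "stored pi    = ", pi                ! the nearest representable value, 3.14159012
PRINT *, "next down    = ", NEAREST(pi, -1.0) ! 3.14158988
PRINT *, "next up      = ", NEAREST(pi, +1.0) ! 3.14159036
END PROGRAM SpacingDemo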

It is common for the last two digits to be erroneous. This is called floating point error.
Check this:
Week 1 - Lecture 2: Binary storage and version control / Fixed and floating point real numbers (9-08).mp4
https://class.coursera.org/scicomp-002/lecture

Related

How to make a good calculator with floating point arithmetic [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
Here is how my calculator should work:
There is a JSON value where I can write the first multiplier - something like this:
{
"value1": 1.4
}
On the calculator I can write the second multiplier - only 10^n numbers (10, 100, ..., 10000000). My calc should return an integer, because I know that people who use my calc will always write fewer digits after the decimal point in the first multiplier than there are zeros in the second multiplier on the calc. Yes, my calc is a very, very strange one.
Here are valid inputs:
v1=1.4; v2=100;
v1=1.414; v2=100000;
v1=1.1; v2=100;
Here is what happens when I do this: for example, for value1=1.4 and value2=10000 I get 13900. Since a float cannot hold every number exactly, it sometimes stores a different number; on my machine 1.4 is internally stored as 1.399999. I know why, but the QA engineer who tests my app tells me that I need to get 14000 and that my calc does not work. How do I make my calc print the correct number?
P.S. Of course I have cut my real problem out of its context, but the point is that I have a float in a file and a 10^n number in my program as user input. How do I get the correct result?
EDIT1: I am not asking why float works that way. I know why. I am asking how to solve the problem even though float works that way.
EDIT2: I use RapidJson to read the JSON file, which already returns the wrong number as a double-precision value. I can't use libraries that provide higher-precision floating point types.
Round the result when you format it for display. A double precision value is correct to about 15 significant digits, so if you round the result to 12 significant digits you're not going to surprise the user.
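The original question is about C++ with RapidJSON, but the idea is language-independent. A minimal sketch of the same fix in Fortran (the register of most of this thread), with illustrative variable names:
program calc_demo
implicit none
double precision :: value1
integer :: value2, result
value1 = 1.4d0                   ! first multiplier, read from the JSON file in the real program
value2 = 10000                   ! second multiplier, always a power of ten
result = nint(value1 * value2)   ! round the inexact product (13999.999...) to the nearest integer
print *, result                  ! prints 14000
end program calc_demo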

Fortran - want to round to one decimal point

In Fortran I have to round latitude and longitude to one digit after the decimal point.
I am using the gfortran compiler and the nint function, but the following does not work:
print *, nint( 1.40 * 10. ) / 10. ! prints 1.39999998
print *, nint( 1.49 * 10. ) / 10. ! prints 1.50000000
Looking for both general and specific solutions here. For example:
How can we display numbers rounded to one decimal place?
How can we store such rounded numbers in Fortran? It's not possible in a float variable, but are there other ways?
How can we write such numbers to NetCDF?
How can we write such numbers to a CSV or text file?
As others have said, the issue is the use of floating point representation in the NetCDF file. Using nco utilities, you can change the latitude/longitude to short integers with scale_factor and add_offset. Like this:
ncap2 -s 'latitude=pack(latitude, 0.1, 0); longitude=pack(longitude, 0.1, 0);' old.nc new.nc
There is no way to do what you are asking. The underlying problem is that the rounded values you desire are not necessarily able to be represented using floating point.
For example, the value 10.58 is stored as the nearest representable float32, approximately 1.3225000 x 2^3 = 10.580000 (shown to seven significant digits).
When you round this value to one decimal place (however you choose to do so), the result would be 10.6; however, 10.6 does not have an exact representation either. The nearest representation is 1.3249999 x 2^3 = 10.599999 in float32. So no matter how you deal with the rounding, there is no way to store 10.6 exactly in a float32 value, and no way to write it as a floating point value into a netCDF file.
YES, IT CAN BE DONE! The "accepted" answer above is correct within its limited range, but is wrong about what you can actually accomplish in Fortran (or various other HGL's).
The only question is what price you are willing to pay if something like a Write with F6.1 fails.
From one perspective, your problem is a particularly trivial variation on the subject of "Arbitrary Precision" computing. How do you imagine cryptography is handled when you need to store, manipulate, and perform "math" with, say, 1024 bit numbers, with exact precision?
A simple strategy in this case would be to separate each number into its constituent "LHSofD" (Left Hand Side of Decimal), and "RHSofD" values. For example, you might have an RLon(i,j) = 105.591, and would like to print 105.6 (or any manner of rounding) to your netCDF (or any normal) file. Split this into RLonLHS(i,j) = 105, and RLonRHS(i,j) = 591.
... at this point you have choices that increase generality, but at some expense. To save "money" the RHS might be retained as 0.591 (but you lose generality if you need to do fancier things).
For simplicity, assume the "cheap and cheerful" second strategy.
The LHS is easy (Int()).
Now, for the RHS, multiply by 10 (if, you wish to round to 1 DEC), e.g. to arrive at RLonRHS(i,j) = 5.91, and then apply Fortran "round to nearest Int" NInt() intrinsic ... leaving you with RLonRHS(i,j) = 6.0.
... and Bob's your uncle:
Now you print the LHS and RHS to your netCDF using a suitable Write statement concatenating the "duals", and will have created an EXACT representation as per the required objectives in the OP.
... of course later reading-in those values returns to the same issues as illustrated above, unless the read-in also is ArbPrec aware.
... we wrote our own ArbPrec lib, but there are several about, also in VBA and other HGL's ... but be warned, a full ArbPrec bit of machinery is a non-trivial matter ... luckily your problem is so simple.
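A minimal sketch of the split-and-round idea described above, for a single non-negative value rather than the RLon(i,j) array (names are illustrative only):
program split_demo
implicit none
real :: rlon
integer :: lhs, rhs
rlon = 105.591
lhs = int(rlon)                   ! left-hand side of the decimal: 105
rhs = nint((rlon - lhs) * 10.0)   ! right-hand side rounded to one digit: 0.591 -> 6
if (rhs == 10) then               ! carry when the fraction rounds up, e.g. 105.96 -> 106.0
lhs = lhs + 1
rhs = 0
end if
print '(I0,".",I0)', lhs, rhs     ! prints the exact text 105.6
end program split_demo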
There are several aspects one can consider in relation to "rounding to one decimal place". These relate to: internal storage and manipulation; display and interchange.
Display and interchange
The simplest aspects cover how we report a stored value, regardless of the internal representation used. As covered in depth in other answers and elsewhere, we can use a numeric edit descriptor with a single fractional digit:
print '(F0.1,2X,F0.1)', 10.3, 10.17
end
How the output is rounded is a changeable mode:
print '(RU,F0.1,2X,RD,F0.1)', 10.17, 10.17
end
In this example we've chosen to round up and then down, but we could also round to zero or round to nearest (or let the compiler choose for us).
For any formatted output, whether to screen or file, such edit descriptors are available. A G edit descriptor, such as one may use to write CSV files, will also do this rounding.
For unformatted output this concept of rounding is not applicable as the internal representation is referenced. Equally for an interchange format such as NetCDF and HDF5 we do not have this rounding.
For NetCDF your attribute convention may specify something like FORTRAN_format, which gives an appropriate format for ultimate display of the (default) real, non-rounded, variable.
Internal storage
Other answers and the question itself mention the impossibility of accurately representing (and working with) decimal digits. However, nothing in the Fortran language requires this to be impossible:
integer, parameter :: rk = SELECTED_REAL_KIND(radix=10)
real(rk) x
x = 0.1_rk
print *, x
end
is a Fortran program which has a radix-10 variable and literal constant. See also IEEE_SELECTED_REAL_KIND(radix=10).
Now, you are exceptionally likely to see that selected_real_kind(radix=10) gives you the value -5, but if you want something positive that can be used as a type parameter you just need to find someone offering you such a system.
If you aren't able to find such a thing then you will need to work accounting for errors. There are two parts to consider here.
The intrinsic real numerical types in Fortran are floating point ones. To use a fixed point numeric type, or a system like binary-coded decimal, you will need to resort to non-intrinsic types. Such a topic is beyond the scope of this answer, but pointers are made in that direction by DrOli.
These efforts will not be computationally/programmer-time cheap. You will also need to take care of managing these types in your output and interchange.
Depending on the requirements of your work, you may find simply scaling by (powers of) ten and working on integers suits. In such cases, you will also want to find the corresponding NetCDF attribute in your convention, such as scale_factor.
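A minimal sketch of that scaling approach, assuming the values fit in a default integer after multiplying by ten:
program scaled_demo
implicit none
real :: lon
integer :: lon_tenths
lon = 10.17
lon_tenths = nint(lon * 10.0)       ! store tenths of a degree exactly as an integer: 102
! ... all further work is done on lon_tenths ...
print '(F0.1)', lon_tenths / 10.0   ! convert back only for display: prints 10.2
end program scaled_demo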
Relating to our internal representation concerns we have similar rounding issues to output. For example, if my input data has a longitude of 10.17... but I want to round it in my internal representation to (the nearest representable value to) a single decimal digit (say 10.2/10.1999998) and then work through with that, how do I manage that?
We've seen how nint(10.17*10)/10. gives us this, but we've also learned something about how numeric edit descriptors do this nicely for output, including controlling the rounding mode:
character(10) :: intermediate
real :: rounded
write(intermediate, '(RN,F0.1)') 10.17
read(intermediate, *) rounded
print *, rounded ! This may look not "exact"
end
We can track the accumulation of errors here if this is desired.
The expression round_x = nint(x*10d0)/10d0 rounds x (for abs(x) < 2**31/10; for larger numbers use dnint()) and assigns the rounded value to the round_x variable for further calculations.
As mentioned in the answers above, not all numbers with one significant digit after the decimal point have an exact representation, for example, 0.3 does not.
print *, 0.3d0
Output:
0.29999999999999999
To output a rounded value to a file or the screen, or to convert it to a string with a single digit after the decimal point, use the edit descriptor Fw.1 (w is the field width in characters; w = 0 gives a variable, minimal width). For example:
print '(5(1x, f0.1))', 1.30, 1.31, 1.35, 1.39, 345.46
Output:
1.3 1.3 1.4 1.4 345.5
@JohnE, using G10.2 is incorrect: it rounds the result to two significant digits, not to one digit after the decimal point. E.g.:
print '(g10.2)', 345.46
Output:
0.35E+03
P.S.
For NetCDF, rounding should be handled by the NetCDF viewer; however, you can output variables as the NC_STRING type:
write(NetCDF_out_string, '(F0.1)') 1.49
Or, alternatively, get "beautiful" NC_FLOAT/NC_DOUBLE numbers:
beautiful_float_x = nint(x*10.)/10. + epsilon(1.)*nint(x*10.)/10./2.
beautiful_double_x = dnint(x*10d0)/10d0 + epsilon(1d0)*dnint(x*10d0)/10d0/2d0
P.P.S. @JohnE
The preferred solution is not to round intermediate results in memory or in files; rounding is performed only when the final human-readable output is produced:
1. Use print with the edit descriptor Fw.1, see above;
2. There are no simple and reliable ways to accurately store rounded numbers (numbers with a decimal fixed point):
2.1. Theoretically, some Fortran implementations could support decimal arithmetic, but I am not aware of any implementation in which selected_real_kind(4, 4, 10) returns a value other than -5;
2.2. It is possible to store rounded numbers as strings;
2.3. You can use the Fortran bindings of the GMP library; the functions with the mpq_ prefix are designed to work with rational numbers;
3. There are no simple and reliable ways to write rounded numbers to a netCDF file while preserving their properties for the reader of that file:
3.1. netCDF supports "Packed Data Values", i.e. you can define an integer variable with the attributes scale_factor and add_offset and save arrays of integers. But in the file scale_factor itself is stored as a single- or double-precision floating-point number, i.e. its value will differ from 0.1. Accordingly, when the netCDF library unpacks on reading, unpacked_data_value = packed_data_value*scale_factor + add_offset, there will be a rounding error. (You can set scale_factor=0.1*(1.+epsilon(1.)) or scale_factor=0.1d0*(1d0+epsilon(1d0)) to avoid a long run of trailing 9 digits.);
3.2. There are C_format and FORTRAN_format attributes, but it is quite difficult to predict which reader will use which attribute, and whether they will use them at all;
3.3. You can store rounded numbers as strings or as user-defined types;
4. Use write() with the edit descriptor Fw.1, see above.
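As a rough illustration of the packing described in 3.1, here is a pure-Fortran sketch (without the actual netCDF calls) of how the integer data and the scale_factor/add_offset attributes relate:
program pack_demo
implicit none
real :: x, unpacked
integer(selected_int_kind(4)) :: packed              ! typically a 16-bit integer, like NC_SHORT
real, parameter :: scale_factor = 0.1, add_offset = 0.0
x = 10.17
packed = nint((x - add_offset) / scale_factor)       ! 102, stored exactly
unpacked = real(packed) * scale_factor + add_offset  ! what a netCDF reader computes
print *, packed                                      ! 102
print *, unpacked                                    ! ~10.2, with the rounding error described above
end program pack_demo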

Dividing two floats doesn't give exact result [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
I divided 9501/100.0f expecting to get a result of 95.01f, but for some deviant reason the result was 95.01000000002f.
I am aware of rounding errors, and also that dividing two bigger floats can give an improper result, but these two numbers are relatively small and should not give a bad answer.
I changed the floats to doubles, only to see the same result.
So my question is: why am I seeing this incorrect output?
And, ideally, I'd like a workaround that doesn't involve copying the number to a string and back.
Floating point numbers are not precise, and dealing with them has lots of idiosyncrasies.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
I also enjoy Bruce Dawson's blog entries on floating point values.
Floating point numbers are numbers represented in binary with limited precision.
The error between the expected result and the actual result is caused by the fact that the number 95.01 is infinitely periodic in binary representation.
A double has only 52 fraction bits (53 significant binary digits), so some rounding has to happen before the number is stored in double precision. Single precision has only 23 fraction bits.
It is not possible to represent 95.01 in a finite-precision floating point number without any error.
However, you may trust the first 6-9 decimal digits, so you should format the number with some meaningful format.
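For readers following the Fortran parts of this thread, the same effect and the same fix look like this (a minimal sketch; the exact digits shown depend on the kind and compiler):
program divide_demo
implicit none
real :: r
r = 9501.0 / 100.0
print *, r          ! list-directed output shows all stored digits, e.g. 95.0100021
print '(F0.2)', r   ! a meaningful format rounds the display: 95.01
end program divide_demo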
Ahh good, another one of us has become a man in the church of programming :)
Floating points are not exact: most decimal fractions have no exact binary representation, so a value like 1.4f is actually stored as something closer to 1.39999998 than to 1.4 (the exact digits depend on the format; I just picked an example from earlier in this thread).

Precision problems of real numbers in Fortran [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
I've been trying to use Fortran for my research project, with the GNU Fortran compiler (gfortran), latest version,
but I've been encountering some problems in the way it processes real numbers. If you have for example the code:
program test
implicit none
real :: y = 23.234, z
z = y * 100000
write(*,*) y, z
end program
You'll get as output:
23.23999 2323400.0
I find this really strange.
Can someone tell me what exactly is happening here? Looking at z I can see that y does retain its precision, so for calculations that shouldn't be a problem, I suppose. But why is the output of y not exactly the same as the value that I've specified, and what can I do to make it exactly the same?
This is not a problem - all you see is the floating-point representation of the number in the computer. The computer cannot handle real numbers exactly, but only approximations of them. A good read about this can be found here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Simply by replacing real with double precision, you can increase the number of significant decimal places from about six to about 15 on most platforms.
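For example (a minimal sketch; note that the literal also needs the d0 suffix, otherwise it is still evaluated as a single-precision constant):
program test_dp
implicit none
double precision :: y = 23.234d0, z
z = y * 100000
write(*,*) y, z   ! both values now agree with the decimal input to about 15 significant digits
end program test_dp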
The general issue is not limited to Fortran; it is the representation of base-10 real numbers in another base of finite precision. This computer science question has been asked many times here.
For the specifically Fortran aspects, the declaration "real" will likely give you a single precision floating point. As will expressing a constant as "23.234" without a type qualifier. The constant "100000" without a decimal point is an integer so the expression "y * 100000" is causing an implicit conversion of an integer to a real because "y" is a real variable.
For some previous discussions of these issues see Extended double precision, Fortran: integer*4 vs integer(4) vs integer(kind=4), and Is There a Better Double-Precision Assignment in Fortran 90?
The problem here is not with Fortran, in fact it is not a problem at all. This is just a feature of floating-point arithmetic. If you think about how you would represent 23.234 as a 'single float' in binary, you would see that the number has to be saved to only so many decimals of precision.
The thing to remember about floating point numbers is: numbers that look round and even in base 10 probably won't be in binary.
For a brief overview of floating-point topics, check the Wikipedia article. And for a VERY thorough explanation, check out the canonical paper by Goldberg (PDF).

What does -1.#IND00 mean? [duplicate]

I'm messing around with some C code using floats, and I'm getting 1.#INF00, -1.#IND00 and -1.#IND when I try to print floats to the screen. What do those values mean?
I believe that 1.#INF00 means positive infinity, but what about -1.#IND00 and -1.#IND? I also sometimes saw this value: 1.$NaN, which is Not a Number, but what causes those strange values and how can they help me with debugging?
I'm using MinGW which I believe uses IEEE 754 representation for float point numbers.
Can someone list all those invalid values and what they mean?
From IEEE floating-point exceptions in C++ :
This page will answer the following questions.
My program just printed out 1.#IND or 1.#INF (on Windows) or nan or inf (on Linux). What happened?
How can I tell if a number is really a number and not a NaN or an infinity?
How can I find out more details at runtime about kinds of NaNs and infinities?
Do you have any sample code to show how this works?
Where can I learn more?
These questions have to do with floating point exceptions. If you get some strange non-numeric output where you're expecting a number, you've either exceeded the finite limits of floating point arithmetic or you've asked for some result that is undefined. To keep things simple, I'll stick to working with the double floating point type. Similar remarks hold for float types.
Debugging 1.#IND, 1.#INF, nan, and inf
If your operation would generate a larger positive number than could be stored in a double, the operation will return 1.#INF on Windows or inf on Linux. Similarly your code will return -1.#INF or -inf if the result would be a negative number too large to store in a double. Dividing a positive number by zero produces a positive infinity and dividing a negative number by zero produces a negative infinity. Example code at the end of this page will demonstrate some operations that produce infinities.
Some operations don't make mathematical sense, such as taking the square root of a negative number. (Yes, this operation makes sense in the context of complex numbers, but a double represents a real number and so there is no double to represent the result.) The same is true for logarithms of negative numbers. Both sqrt(-1.0) and log(-1.0) would return a NaN, the generic term for a "number" that is "not a number". Windows displays a NaN as -1.#IND ("IND" for "indeterminate") while Linux displays nan. Other operations that would return a NaN include 0/0, 0*∞, and ∞/∞. See the sample code below for examples.
In short, if you get 1.#INF or inf, look for overflow or division by zero. If you get 1.#IND or nan, look for illegal operations. Maybe you simply have a bug. If it's more subtle and you have something that is difficult to compute, see Avoiding Overflow, Underflow, and Loss of Precision. That article gives tricks for computing results that have intermediate steps overflow if computed directly.
For anyone wondering about the difference between -1.#IND00 and -1.#IND (which the question specifically asked, and none of the answers address):
-1.#IND00
This specifically means a non-zero number divided by zero, e.g. 3.14 / 0 (source)
-1.#IND (a synonym for NaN)
This means one of four things (see wiki from source):
1) sqrt or log of a negative number
2) operations where both variables are 0 or infinity, e.g. 0 / 0
3) operations where at least one variable is already NaN, e.g. NaN * 5
4) out of range trig, e.g. arcsin(2)
Your question "what are they" is already answered above.
As far as debugging (your second question) though, and in developing libraries where you want to check for special input values, you may find the following functions useful in Windows C++:
_isnan(), _isfinite(), and _fpclass()
On Linux/Unix you should find isnan(), isfinite(), isnormal(), isinf(), fpclassify() useful (and you may need to link with libm by using the compiler flag -lm).
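Since most of this thread is about Fortran, it is worth noting that the equivalent checks there live in the standard ieee_arithmetic module (Fortran 2003 and later); a minimal sketch:
program nan_demo
use, intrinsic :: ieee_arithmetic
implicit none
real :: x, y
x = ieee_value(x, ieee_quiet_nan)      ! build a quiet NaN
y = ieee_value(y, ieee_positive_inf)   ! build a positive infinity
print *, ieee_is_nan(x)                ! T
print *, ieee_is_finite(y)             ! F
print *, ieee_is_nan(y)                ! F (an infinity is not a NaN)
end program nan_demo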
For those of you in a .NET environment the following can be a handy way to filter non-numbers out (this example is in VB.NET, but it's probably similar in C#):
If Double.IsNaN(MyVariableName) Then
    MyVariableName = 0 ' Or whatever you want to do here to "correct" the situation
End If
If you try to use a variable that has a NaN value you will get the following error:
Value was either too large or too small for a Decimal.