I have a FLOAT column in a SQL Server database that appears as follows in SQL Server Management Studio.
18.001
When I read that value into a float variable, and format it using sprintf() ("%f"), it appears as:
18.000999
When I read that value into a double variable, and format it using sprintf(), it appears as:
18.001000
Could I get some suggestions on this? The values being stored are generally under 100, with up to 3 decimal places. What is the best SQL Server type? What is the best C++ type? And should I be using some rounding technique to get it in the format I want?
Note: I'm not actually using sprintf(), I'm using CString.Format(), but the expected behavior is the same.
The values being stored are generally under 100, with up to 3 decimal places.
SQL databases support the numeric/decimal types (the two are synonyms) for fixed-point values. For your specific case, you could use decimal(6, 3): six significant digits in total, with three of them to the right of the decimal point. These two values are called precision and scale respectively.
If the values can differ a bit from this, you might want a wider range.
With decimal/numeric, what-you-see-is-what-you-get. I would recommend storing them in the database as fixed-point numbers.
Answering the question at face value, assuming floating point should be used and fixed point is not applicable.
Unless you are really tight on memory, there is no reason to use anything but double for floating-point numbers in C++. float loses precision without giving you much in return. You can also try long double, but in my experience it is overkill. Also, if your compiler is MSVC, I have heard its long double is the same as double.
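For illustration, here is a minimal sketch (added here, not part of the original answer; the exact printed digits can vary slightly by platform) of the float/double difference the question describes:
#include <cstdio>

int main() {
    float  f = 18.001f;   // nearest 32-bit float to 18.001
    double d = 18.001;    // nearest 64-bit double to 18.001
    std::printf("%f\n", f);   // prints 18.000999: float keeps only ~7 significant digits
    std::printf("%f\n", d);   // prints 18.001000: double has digits to spare for %f's 6 decimals
    return 0;
}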
As an alternative to the fixed-point decimals proposed already, just use ordinary integers!
Instead of storing 18.001 seconds, you'd store 18001 milliseconds; instead of euros, pounds, or dollars, you'd store tenths of a cent or penny, and so on.
The C++ type would be an integer as well, large enough to hold the maximum values you need, e.g. uint32_t or int64_t.
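A minimal C++ sketch of the idea (my illustration, assuming the values are thousandths such as milliseconds; the formatting at the end is just for display):
#include <cstdint>
#include <cstdio>

int main() {
    std::int64_t millis = 18001;              // 18.001 seconds stored exactly as 18001 ms

    std::int64_t sum = millis + 250;          // integer arithmetic stays exact: 18251 ms

    // Convert back to a decimal form only when displaying.
    std::printf("%lld.%03lld\n",
                static_cast<long long>(sum / 1000),
                static_cast<long long>(sum % 1000));   // prints 18.251
    return 0;
}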
In Fortran I have to round latitude and longitude to one digit after the decimal point.
I am using the gfortran compiler and the nint function, but the following does not work:
print *, nint( 1.40 * 10. ) / 10. ! prints 1.39999998
print *, nint( 1.49 * 10. ) / 10. ! prints 1.50000000
Looking for both general and specific solutions here. For example:
How can we display numbers rounded to one decimal place?
How can we store such rounded numbers in Fortran? It's not possible in a float variable, but are there other ways?
How can we write such numbers to NetCDF?
How can we write such numbers to a CSV or text file?
As others have said, the issue is the use of floating point representation in the NetCDF file. Using the NCO utilities, you can change the latitude/longitude to short integers with a scale_factor and add_offset, like this:
ncap2 -s 'latitude=pack(latitude, 0.1, 0); longitude=pack(longitude, 0.1, 0);' old.nc new.nc
There is no way to do what you are asking. The underlying problem is that the rounded values you desire are not necessarily able to be represented using floating point.
For example, suppose you have a value 10.58 and you round it to one decimal place (however you choose to do so); the result would be 10.6. However, 10.6 does not have an exact binary representation: the nearest representation in IEEE754 float32 is approximately 1.32500005 x 2^3 = 10.6000004. So no matter how you deal with the rounding, there is no way to store 10.6 exactly in a float32 value, and no way to write it as a floating point value into a netCDF file.
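As a concrete check (a small C++ sketch added here, not part of the original answer), printing the values actually stored for 10.6 shows the nearest representable neighbours:
#include <cstdio>

int main() {
    float  f = 10.6f;   // nearest 32-bit float to 10.6
    double d = 10.6;    // nearest 64-bit double to 10.6
    std::printf("%.9f\n", f);    // prints 10.600000381
    std::printf("%.17f\n", d);   // prints 10.59999999999999964
    return 0;
}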
YES, IT CAN BE DONE! The "accepted" answer above is correct in its limited range, but is wrong about what you can actually accomplish in Fortran (or various other HGL's).
The only question is what price you are willing to pay if something like a Write with an F6.1 edit descriptor is not enough.
From one perspective, your problem is a particularly trivial variation on the subject of "Arbitrary Precision" computing. How do you imagine cryptography is handled when you need to store, manipulate, and perform "math" with, say, 1024 bit numbers, with exact precision?
A simple strategy in this case would be to separate each number into its constituent "LHSofD" (Left Hand Side of Decimal), and "RHSofD" values. For example, you might have an RLon(i,j) = 105.591, and would like to print 105.6 (or any manner of rounding) to your netCDF (or any normal) file. Split this into RLonLHS(i,j) = 105, and RLonRHS(i,j) = 591.
... at this point you have choices that increase generality, but at some expense. To save "money" the RHS might be retained as 0.591 (but you lose generality if you need to do fancier things).
For simplicity, assume the "cheap and cheerful" second strategy.
The LHS is easy (Int()).
Now, for the RHS, multiply by 10 (if, you wish to round to 1 DEC), e.g. to arrive at RLonRHS(i,j) = 5.91, and then apply Fortran "round to nearest Int" NInt() intrinsic ... leaving you with RLonRHS(i,j) = 6.0.
... and Bob's your uncle:
Now you print the LHS and RHS to your netCDF using a suitable Write statement concatenating the "duals", and will have created an EXACT representation as per the required objectives in the OP.
... of course later reading in those values brings back the same issues as illustrated above, unless the read-in is also ArbPrec aware.
... we wrote our own ArbPrec lib, but there are several about, also in VBA and other HGLs ... but be warned, a full ArbPrec bit of machinery is a non-trivial matter ... luckily your problem is so simple.
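For what it's worth, a minimal C++ sketch of the split-and-round idea described above (my illustration, not DrOli's library; positive values are assumed, and the carry when the fraction rounds up to 1.0 is handled explicitly):
#include <cmath>
#include <cstdio>

int main() {
    double rlon = 105.591;                 // value to round to one decimal place

    long lhs = static_cast<long>(rlon);    // left-hand side of the decimal: 105
    double frac = rlon - lhs;              // right-hand side as a fraction: ~0.591
    long rhs = std::lround(frac * 10.0);   // round to the nearest tenth: 6

    if (rhs == 10) { ++lhs; rhs = 0; }     // carry if the fraction rounded up to 1.0

    std::printf("%ld.%ld\n", lhs, rhs);    // the text "105.6" is exact
    return 0;
}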
There are several aspects one can consider in relation to "rounding to one decimal place". These relate to: internal storage and manipulation; display and interchange.
Display and interchange
The simplest aspects cover how we report stored value, regardless of the internal representation used. As covered in depth in other answers and elsewhere we can use a numeric edit descriptor with a single fractional digit:
print '(F0.1,2X,F0.1)', 10.3, 10.17
end
How the output is rounded is a changeable mode:
print '(RU,F0.1,2X,RD,F0.1)', 10.17, 10.17
end
In this example we've chosen to round up and then down, but we could also round to zero or round to nearest (or let the compiler choose for us).
For any formatted output, whether to screen or file, such edit descriptors are available. A G edit descriptor, such as one may use to write CSV files, will also do this rounding.
For unformatted output this concept of rounding is not applicable as the internal representation is referenced. Equally for an interchange format such as NetCDF and HDF5 we do not have this rounding.
For NetCDF your attribute convention may specify something like FORTRAN_format, which gives an appropriate format for ultimate display of the (default) real, non-rounded variable.
Internal storage
Other answers and the question itself mention the impossibility of accurately representing (and working with) decimal digits. However, nothing in the Fortran language requires this to be impossible:
integer, parameter :: rk = SELECTED_REAL_KIND(radix=10)
real(rk) x
x = 0.1_rk
print *, x
end
is a Fortran program which has a radix-10 variable and literal constant. See also IEEE_SELECTED_REAL_KIND(radix=10).
Now, you are exceptionally likely to see that selected_real_kind(radix=10) gives you the value -5, but if you want something positive that can be used as a type parameter you just need to find someone offering you such a system.
If you aren't able to find such a thing then you will need to work accounting for errors. There are two parts to consider here.
The intrinsic real numerical types in Fortran are floating point ones. To use a fixed point numeric type, or a system like binary-coded decimal, you will need to resort to non-intrinsic types. Such a topic is beyond the scope of this answer, but pointers are made in that direction by DrOli.
These efforts will not be computationally/programmer-time cheap. You will also need to take care of managing these types in your output and interchange.
Depending on the requirements of your work, you may find simply scaling by (powers of) ten and working on integers suits. In such cases, you will also want to find the corresponding NetCDF attribute in your convention, such as scale_factor.
Relating to our internal representation concerns we have similar rounding issues to output. For example, if my input data has a longitude of 10.17... but I want to round it in my internal representation to (the nearest representable value to) a single decimal digit (say 10.2/10.1999998) and then work through with that, how do I manage that?
We've seen how nint(10.17*10)/10. gives us this, but we've also learned something about how numeric edit descriptors do this nicely for output, including controlling the rounding mode:
character(10) :: intermediate
real :: rounded
write(intermediate, '(RN,F0.1)') 10.17
read(intermediate, *) rounded
print *, rounded ! This may look not "exact"
end
We can track the accumulation of errors here if this is desired.
The round_x = nint(x*10d0)/10d0 expression rounds x to one decimal place (for abs(x) < 2**31/10; for larger numbers use dnint()) and assigns the rounded value to the round_x variable for further calculations.
As mentioned in the answers above, not all numbers with one digit after the decimal point have an exact binary representation; for example, 0.3 does not.
print *, 0.3d0
Output:
0.29999999999999999
To output a rounded value to a file or the screen, or to convert it to a string with a single digit after the decimal point, use the edit descriptor 'Fw.1' (w is the field width in characters; w = 0 means variable width). For example:
print '(5(1x, f0.1))', 1.30, 1.31, 1.35, 1.39, 345.46
Output:
1.3 1.3 1.4 1.4 345.5
@JohnE, using 'G10.2' is incorrect: it rounds the result to two significant digits, not to one digit after the decimal point. E.g.:
print '(g10.2)', 345.46
Output:
0.35E+03
P.S.
For NetCDF, rounding should be handled by the NetCDF viewer; however, you can output variables as the NC_STRING type:
write(NetCDF_out_string, '(F0.1)') 1.49
Or, alternatively, get "beautiful" NC_FLOAT/NC_DOUBLE numbers:
beautiful_float_x = nint(x*10.)/10. + epsilon(1.)*nint(x*10.)/10./2.
beautiful_double_x = dnint(x*10d0)/10d0 + epsilon(1d0)*dnint(x*10d0)/10d0/2d0
P.P.S. @JohnE
The preferred solution is not to round intermediate results in memory or in files. Rounding is performed only when the final output of human-readable data is issued;
Use print with edit descriptor ‘Fw.1’, see above;
There are no simple and reliable ways to accurately store rounded numbers (numbers with a decimal fixed point):
2.1. Theoretically, some Fortran implementations could support decimal arithmetic, but I am not aware of implementations in which selected_real_kind(4, 4, 10) returns a value other than -5;
2.2. It is possible to store rounded numbers as strings;
2.3. You can use the Fortran binding of the GMP library. Functions with the mpq_ prefix are designed to work with rational numbers;
There are no simple and reliable ways to write rounded numbers in a netCDF file while preserving their properties for the reader of this file:
3.1. netCDF supports 'Packed Data Values', i.e. you can declare an integer type with the attributes 'scale_factor' and 'add_offset' and save arrays of integers. But in the file 'scale_factor' will be stored as a single- or double-precision floating-point number, i.e. the value will differ from 0.1. Accordingly, when the netCDF library computes unpacked_data_value = packed_data_value*scale_factor + add_offset on reading, there will be a rounding error (see the sketch after this list). (You can set scale_factor=0.1*(1.+epsilon(1.)) or scale_factor=0.1d0*(1d0+epsilon(1d0)) to exclude a large number of trailing '9' digits.);
3.2. There are C_format and FORTRAN_format attributes. But it is quite difficult to predict which reader will use which attribute and whether they will use them at all;
3.3. You can store rounded numbers as strings or user-defined types;
Use write() with edit descriptor ‘Fw.1’, see above.
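As a rough C++ sketch of the packing error mentioned in 3.1 (my illustration; the unpacking formula follows the netCDF packing convention, but is computed here by hand rather than by the netCDF library):
#include <cstdio>

int main() {
    // Packed value and attributes roughly as they might sit in the file.
    short packed = 181;           // intended to mean 18.1
    float scale_factor = 0.1f;    // stored as a float, so not exactly 0.1
    float add_offset = 0.0f;

    float unpacked = packed * scale_factor + add_offset;
    std::printf("%.7f\n", unpacked);   // prints roughly 18.1000004, not 18.1000000
    return 0;
}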
It's commonly said that "SAS missing values equal minus infinity". But there is a problem with that statement, since there can be 27 or 28 "flavors" of missing values (the default . and .a to .z and ._), each having a predefined sort order.
Since it can't be that some infinities are larger than others, I came to understand that:
Missing values are treated like minus infinity when compared to valid numerical data, and that
When compared to other missing values, they are ranked with another set of predefined rules.
So my question is: at the lowest level, how does SAS store numerical data in a way that it can distinguish the missing from the non-missing numerical values? Is there a "missingness bit" like there is a "sign bit"?
SAS stores numbers as floating point values using the 64-bit IEEE format. They picked 28 specific bit combinations and use them to represent ., ._, and .a to .z. By convention they are ordered ._ to . to .a to .z. I am not sure if the values were picked to make it easier to test that ordering, or if the ordering was an accident of the particular bit patterns they used.
You can look at the bit patterns used by peeking into the values that are stored.
data _null_;
length i 8 str $8 ;
do i=._,.,.a,.z,constant('small'),0,1,constant('big');
str=peekclong(addrlong(i));
str=reverse(str);
put i best12. #15 i hex16. #35 str $hex16. ;
end;
run;
result
_ _ FFFFFF0000000000
. . FFFFFE0000000000
A A FFFFFD0000000000
Z Z FFFFE40000000000
2.22507E-308 0010000000000000 0010000000000000
0 0000000000000000 0000000000000000
1 3FF0000000000000 3FF0000000000000
1.797693E308 7FEFFFFFFFFFFFFF 7FEFFFFFFFFFFFFF
I am using a Firebird 1.5.x database and it has problems with values that are stored in double precision or numeric(15,2) fields. E.g. I can issue this update (field_1 and field_2 are declared as numeric(15,2)):
update test_table set
field_1=0.34,
field_2=0.69;
But when field_1 and field_2 are read into variables var_1 and var_2 inside an SQL stored procedure, var_1 and var_2 take the values 0.340000000000000024 and 0.689999999999999947 respectively. The multiplication var_1*var_2*25 gives 5.8649999999999999635, which gets rounded to 5.86. This is apparently wrong, because the correct final value is 5.87.
The rounding is done with a user-defined function which comes from a C++ DLL that I develop. The idea is to detect situations with representation error, make a correction, and apply the rounding procedure to the corrected values only.
So, the question is: how to detect and correct representation errors? E.g.
detect that 0.689999999999999947 should be corrected to 0.69 and that 0.340000000000000024 should be corrected to 0.34. Generally there can be situations where the number of significant digits after the point is more or less than 2, e.g. 0.23459999999999999854 should be corrected to 0.2346.
Is it possible to do this in C++, and do solutions for this perhaps already exist?
P.S. I tested this case in more recent Firebird 2.x versions and there is no problem with reading database fields into variables in a stored procedure. But I would guess that representation errors can nevertheless arise in more recent Firebird versions too during some lengthy calculations.
Thanks!
This problem exists in all programming languages where floating point variables are used.
You need to understand that there is no accurate way to display 1/3. Any display of that value is a compromise that must be agreed to between the concerned parties. The required precision is what must be agreed to. If you understand that, we can move on.
So how do we (for example) develop accounting systems with accuracy? One approach is to round(column_name, required_precision) all values as they are used in calculations. Also round the result of any division to the same precision. It is important to consistently use the agreed-to precision throughout the application.
Another alternative is to always multiply the floating-point values by 100 (if your values represent currency) and assign the resulting values to integer variables. Again, the result of a division must be rounded to 0 decimal places before it is assigned to its integer storage. Values are then divided by 100 for display purposes. This is as good as it gets.
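A minimal C++ sketch of both approaches under the assumptions above (round_to is a made-up helper for this illustration, not an existing Firebird UDF):
#include <cmath>
#include <cstdint>
#include <cstdio>

// Round a value to a fixed number of decimal places (the agreed precision).
double round_to(double value, int decimals) {
    double scale = std::pow(10.0, decimals);
    return std::round(value * scale) / scale;
}

int main() {
    // Values as read back from the database, carrying representation error.
    double var_1 = 0.340000000000000024;   // intended: 0.34
    double var_2 = 0.689999999999999947;   // intended: 0.69

    // Approach 1: snap every value back to the agreed precision before using it.
    double a = round_to(var_1, 2);
    double b = round_to(var_2, 2);
    std::printf("%.2f\n", round_to(a * b * 25.0, 2));   // should print 5.87 with IEEE doubles

    // Approach 2: keep currency as integer cents, so the arithmetic is exact.
    std::int64_t cents_1 = 34, cents_2 = 69;
    std::int64_t raw = cents_1 * cents_2 * 25;   // 58650, in units of 1/10000
    std::int64_t cents = (raw + 50) / 100;       // round half up back to cents: 587
    std::printf("%lld.%02lld\n",
                static_cast<long long>(cents / 100),
                static_cast<long long>(cents % 100));   // prints 5.87
    return 0;
}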
So I want to compare a summation field from 2 tables based on certain grouped variables. But because I don't care about any difference smaller than .000099, I rounded the field to the 4th decimal before using PROC COMPARE, but I'm still seeing differences smaller than .000099.
I don't want to use the METHOD argument in PROC COMPARE.
Try the criterion option rather than method:
proc compare data = x criterion = 0.0001;
Some discussion can be found here under The Equality Criterion.
Edit: As Joe points out this implicitly sets method = relative, so to fit the question method = absolute would also be necessary. But unfortunately that falls short of Jayesh's request...
If you absolutely can't stand to change METHOD, then you do have the option of FUZZ, which will allow you to hide differences less than the fuzz factor. It doesn't make the differences go away - they still flag as different - but it hides the difference (any difference < FUZZ will be shown as zero or missing depending on the context). You would then have to postprocess your dataset or report in order to eliminate those differences by hand.
If you're seeing differences like this after rounding, what you're likely seeing is issues caused by floating point precision. Even with rounding, the next significant digit can be affected; you would need to round to something even less significant to be sure that a 'true' .0001 difference is suppressed. [I.e., rounding doesn't work perfectly because the rounded number still has to be stored as a binary numeric; since you round to decimal values, not binary ones, it doesn't guarantee an exactly storable number.]