Setting float precision in hdf5 dataset - python-2.7

I'm surprised I wasn't able to find an answer to this question. I'm writing float values to an hdf5 dataset, and I want to set the precision at 10 decimals. From the documentation on hdf5 datasets, there doesn't seem to be any way to set precision. The closest I get is doing either 'float32' or 'float64', but 'float32' cuts off my numbers. File size is a big concern for me, and the unnecessary digits for 'float64' make the file significantly larger. Is it possible to choose precision with hdf5?
An example of my issue:
With the true value of data[0] being 0.0066896507
group.create_dataset(name, data=data, dtype='float64')
data[0] yields 0.0066896506999999999, but
group.create_dataset(name, data=data, dtype='float32')
gives me 0.0066896505, which is incorrect. Other numbers in the dataset are even more incorrect.
It's also odd, because when I do
x = h5py.File(my_file,'r')
print(x['dataset'][0])
it gives me the correct number. But when I just type x['dataset'][0] into the console, it gives what I wrote above. How is the data actually being stored? Is it really giving those extra digits? As you can see I'm a little new to hdf5 (and python in general). Thanks for the help.

To create custom precision types, you'll need to drop to the low-level bindings of h5py, specifically the functions/types outlined at http://api.h5py.org/h5t.html#atomic-classes. See https://github.com/h5py/h5py/blob/master/h5py/h5t.pyx#L202 for an example of how this is done (for half/16-bit floats).
However, this probably isn't what you want (given the reference to decimal digits). Whilst base-10 floating point numbers exist (see e.g. https://en.wikipedia.org/wiki/Decimal64_floating-point_format), in practice if you're using Python all floating point numbers are base-2. This means you care about the number of bits a value is stored in (and in what format, see https://en.wikipedia.org/wiki/IEEE_754#Basic_and_interchange_formats). It's also worth noting that it's entirely possible to print more digits than you have precision for (e.g. I can print a float32, which stores ~7 significant figures, with 30 significant figures, but that doesn't mean I have 30 significant figures worth of precision). So given that you care about at least 10 significant figures worth of precision, you should use float64 (also known as double or binary64).
If you are concerned about file size, it's worth looking at h5py's compression support, see http://docs.h5py.org/en/latest/high/dataset.html#filter-pipeline.
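If it helps, here is a minimal sketch of both points (the file and dataset names are made up, and the printed float32 value may vary in its last digits):

import h5py
import numpy as np

data = np.array([0.0066896507], dtype='float64')

with h5py.File('example.h5', 'w') as f:
    # float32 keeps only ~7 significant decimal digits
    f.create_dataset('as_f32', data=data, dtype='float32')
    # float64 keeps the full value; gzip shrinks the file losslessly
    f.create_dataset('as_f64', data=data, dtype='float64',
                     compression='gzip', compression_opts=4)

with h5py.File('example.h5', 'r') as f:
    print(f['as_f32'][0])  # something like 0.0066896505
    print(f['as_f64'][0])  # 0.0066896507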

Why cents for std::put_money()?

I'm wondering why the std::put_money() function accepts cents instead of dollars. Also, looking at the definition on cppreference, it does not say what units the input number should be in.
Is it true that, whatever the currency, we have to pass a decimal number expressed in the smallest unit of said currency (i.e. multiplied by 1.0, 100.0, or 1000.0 as the case may be)? Because that seems to incorporate knowledge of the currency as opposed to the current locale...
The general idea is that you don't want to use floating point with currency, because values with a finite number of decimal digits can be periodic in binary, and given that floating point values have finite precision this leads to surprises when summing them; the usual example is
#include <stdio.h>

int main(void) {
    double v = 0.;
    for (int i = 0; i < 10; ++i) v += 0.1;
    printf("%0.18g\n", v - 1.0);
    return 0;
}
which prints -1.11022302462515654e-16.
A simple approach to deal with the problem is to use integral values representing "the smallest non-fractional units of the currency" (thanks @Justin for the quote); this makes sure that when the user inputs $0.10 it's exactly represented, and does not lead to any rounding surprise, at least as long as we are dealing with values where exact precision is expected.
This is fine and explains the cents, but why long double and not some integral type? Here I'm speculating, but I see two reasonable motivations:
fractional amounts of currency are something that exists, typically for unitary prices (e.g. the price per liter of gasoline); the precision there is generally less of an issue - you are going to multiply it by another floating point value anyway - but you want to be able to read such values;
but most importantly, historically floating point values had the best precision over a wide spectrum of platforms, even for integral values. long long (guaranteed to be at least 64 bit) is a recent addition to the standard, and long was generally 32 bit wide: it would have capped monetary values to a meager ~21 million dollars.
OTOH, even a plain double on most platforms has a 53-bit mantissa, which means that it can represent exactly integral values up to 9007199254740991 - so, something like 90 thousand billion dollars; that's good enough to represent exactly the US public debt down to cents, so it's probably precise enough for pretty much anything else. They probably chose long double as "the biggest hammer they can throw at the problem" (even if nowadays it's generally as big as a plain double).
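A quick way to see that 2^53 threshold (shown in Python, whose floats are the same IEEE binary64 as a typical C++ double):

print(2.0 ** 53 - 1)               # 9007199254740991.0, exactly representable
print(2.0 ** 53 + 1 == 2.0 ** 53)  # True: 2^53 + 1 rounds back down to 2^53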
Because that seems to incorporate knowledge of the currency opposed to the current locale...
Yes and no; I think that the idea was that, as long as you use the relevant locale facets both for input and for output, you simply shouldn't really care - the library should do the conversions for you, and you just work with numbers whose exact magnitude shouldn't really matter to you.
That's the theory; but as said in the comments, C and C++ locales are a badly designed piece of software, with an overly complicated design which however falls short when tested for real-world usage.
Honestly, I would never use this stuff "for real":
you can never be sure how up to date the standard library is, how broken it is (I once had VC++ unable to round-trip Italian-localized numbers), or whether it actually supports the currencies you care about;
you do need to care about what its idea of "smallest non-fractional unit of the currency" is if you need to talk with anything besides textual IO in the format expected by the library - say, you have to get the price of a stock from a web service, or you have built-in data to combine with the user input;
the same goes for serialization in a machine-readable format; you don't want to expose yourself to the vagaries of your C runtime and OS configuration when storing user data, especially if it is to be exchanged with other applications, and especially if those applications run on a different C runtime (it may even be your own application compiled for a different operating system!) or are written in a different language.

Fortran - want to round to one decimal point

In Fortran I have to round latitude and longitude to one digit after the decimal point.
I am using the gfortran compiler and the nint function, but the following does not work:
print *, nint( 1.40 * 10. ) / 10. ! prints 1.39999998
print *, nint( 1.49 * 10. ) / 10. ! prints 1.50000000
Looking for both general and specific solutions here. For example:
How can we display numbers rounded to one decimal place?
How can we store such rounded numbers in fortran. It's not possible in a float variable, but are there other ways?
How can we write such numbers to NetCDF?
How can we write such numbers to a CSV or text file?
As others have said, the issue is the use of floating point representation in the NetCDF file. Using nco utilities, you can change the latitude/longitude to short integers with scale_factor and add_offset. Like this:
ncap2 -s 'latitude=pack(latitude, 0.1, 0); longitude=pack(longitude, 0.1, 0);' old.nc new.nc
There is no way to do what you are asking. The underlying problem is that the rounded values you desire are not necessarily able to be represented using floating point.
For example, if you had the value 10.58, it is stored in IEEE 754 float32 as the nearest representable value, which prints as 1.3225000 x 2^3 = 10.580000.
When you round this value to one decimal place (however you choose to do so), the result would be 10.6; however, 10.6 does not have an exact representation either. The nearest float32 is 1.3250000 x 2^3, which is exactly 10.6000003814697265625. So no matter how you deal with the rounding, there is no way to store 10.6 exactly in a float32 value, and no way to write it as a floating point value into a netCDF file.
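You can check this by forcing 10.6 through float32 and printing the exact stored value (a small Python sketch; struct round-trips the number through the 4-byte IEEE format):

from decimal import Decimal
import struct

# round 10.6 to the nearest IEEE 754 float32, then recover it as a double
f32 = struct.unpack('<f', struct.pack('<f', 10.6))[0]
print(Decimal(f32))  # 10.6000003814697265625, not 10.6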
YES, IT CAN BE DONE! The "accepted" answer above is correct within its limited scope, but is wrong about what you can actually accomplish in Fortran (or various other HGLs).
The only question is what price you are willing to pay if something like a Write with F6.1 fails.
From one perspective, your problem is a particularly trivial variation on the subject of "Arbitrary Precision" computing. How do you imagine cryptography is handled when you need to store, manipulate, and perform "math" with, say, 1024-bit numbers, with exact precision?
A simple strategy in this case would be to separate each number into its constituent "LHSofD" (Left Hand Side of Decimal), and "RHSofD" values. For example, you might have an RLon(i,j) = 105.591, and would like to print 105.6 (or any manner of rounding) to your netCDF (or any normal) file. Split this into RLonLHS(i,j) = 105, and RLonRHS(i,j) = 591.
... at this point you have choices that increase generality, but at some expense. To save "money" the RHS might be retained as 0.591 (but you lose generality if you need to do fancier things).
For simplicity, assume the "cheap and cheerful" second strategy.
The LHS is easy (Int()).
Now, for the RHS, multiply by 10 (if you wish to round to 1 decimal), e.g. to arrive at RLonRHS(i,j) = 5.91, and then apply Fortran's "round to nearest Int" NInt() intrinsic ... leaving you with RLonRHS(i,j) = 6.0.
... and Bob's your uncle:
Now you print the LHS and RHS to your netCDF using a suitable Write statement concatenating the "duals", and it will create an EXACT representation as per the required objectives in the OP.
... of course later reading-in those values returns to the same issues as illustrated above, unless the read-in also is ArbPrec aware.
... we wrote our own ArbPrec lib, but there are several about, also in VBA and other HGLs ... but be warned, a full ArbPrec bit of machinery is a non-trivial matter ... lucky your problem is so simple.
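To make the split idea above concrete, here is a toy rendering (Python for brevity; the exactness lives in the text, and a real implementation must handle the carry when the RHS rounds up to 10):

lon = 105.591
lhs = int(lon)                 # 105, left of the decimal
rhs = round((lon - lhs) * 10)  # 6, one decimal rounded to nearest
print("%d.%d" % (lhs, rhs))    # "105.6" -- exact, but only as text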
There are several aspects one can consider in relation to "rounding to one decimal place". These relate to: internal storage and manipulation; display and interchange.
Display and interchange
The simplest aspects cover how we report a stored value, regardless of the internal representation used. As covered in depth in other answers and elsewhere, we can use a numeric edit descriptor with a single fractional digit:
print '(F0.1,2X,F0.1)', 10.3, 10.17
end
How the output is rounded is a changeable mode:
print '(RU,F0.1,2X,RD,F0.1)', 10.17, 10.17
end
In this example we've chosen to round up and then down, but we could also round to zero or round to nearest (or let the compiler choose for us).
For any formatted output, whether to screen or file, such edit descriptors are available. A G edit descriptor, such as one may use to write CSV files, will also do this rounding.
For unformatted output this concept of rounding is not applicable, as the internal representation is used directly. Equally, for an interchange format such as NetCDF or HDF5 we do not have this rounding.
For NetCDF your attribute convention may specify something like FORTRAN_format, which gives an appropriate format for ultimate display of the (default) real, non-rounded variable.
Internal storage
Other answers and the question itself mention the impossibility of accurately representing (and working with) decimal digits. However, nothing in the Fortran language requires this to be impossible:
integer, parameter :: rk = SELECTED_REAL_KIND(radix=10)
real(rk) x
x = 0.1_rk
print *, x
end
is a Fortran program which has a radix-10 variable and literal constant. See also IEEE_SELECTED_REAL_KIND(radix=10).
Now, you are exceptionally likely to see that selected_real_kind(radix=10) gives you the value -5, but if you want something positive that can be used as a type parameter you just need to find someone offering you such a system.
If you aren't able to find such a thing then you will need to work accounting for errors. There are two parts to consider here.
The intrinsic real numerical types in Fortran are floating point ones. To use a fixed point numeric type, or a system like binary-coded decimal, you will need to resort to non-intrinsic types. Such a topic is beyond the scope of this answer, but pointers are made in that direction by DrOli.
These efforts will not be computationally/programmer-time cheap. You will also need to take care of managing these types in your output and interchange.
Depending on the requirements of your work, you may find simply scaling by (powers of) ten and working on integers suits. In such cases, you will also want to find the corresponding NetCDF attribute in your convention, such as scale_factor.
Relating to our internal representation concerns we have similar rounding issues to output. For example, if my input data has a longitude of 10.17... but I want to round it in my internal representation to (the nearest representable value to) a single decimal digit (say 10.2/10.1999998) and then work through with that, how do I manage that?
We've seen how nint(10.17*10)/10. gives us this, but we've also learned something about how numeric edit descriptors do this nicely for output, including controlling the rounding mode:
character(10) :: intermediate
real :: rounded
write(intermediate, '(RN,F0.1)') 10.17
read(intermediate, *) rounded
print *, rounded ! This may look not "exact"
end
We can track the accumulation of errors here if this is desired.
The round_x = nint(x*10d0)/10d0 expression rounds x (for abs(x) < 2**31/10; for larger numbers use dnint()) and assigns the rounded value to the round_x variable for further calculations.
As mentioned in the answers above, not all numbers with one significant digit after the decimal point have an exact representation, for example, 0.3 does not.
print *, 0.3d0
Output:
0.29999999999999999
To output a rounded value to a file or the screen, or to convert it to a string with a single significant digit after the decimal point, use the edit descriptor 'Fw.1' (w is the field width in characters; w = 0 means variable width). For example:
print '(5(1x, f0.1))', 1.30, 1.31, 1.35, 1.39, 345.46
Output:
1.3 1.3 1.4 1.4 345.5
@JohnE, using 'G10.2' is incorrect; it rounds the result to two significant digits, not to one digit after the decimal point. E.g.:
print '(g10.2)', 345.46
Output:
0.35E+03
P.S.
For NetCDF, rounding should be handled by the NetCDF viewer; however, you can output variables as NC_STRING type:
write(NetCDF_out_string, '(F0.1)') 1.49
Or, alternatively, get "beautiful" NC_FLOAT/NC_DOUBLE numbers:
beautiful_float_x = nint(x*10.)/10. + epsilon(1.)*nint(x*10.)/10./2.
beautiful_double_x = dnint(x*10d0)/10d0 + epsilon(1d0)*dnint(x*10d0)/10d0/2d0
P.P.S. @JohnE
1. The preferred solution is not to round intermediate results in memory or in files; rounding is performed only when the final human-readable output is produced:
1.1. Use print with edit descriptor 'Fw.1', see above.
2. There are no simple and reliable ways to accurately store rounded numbers (numbers with a decimal fixed point):
2.1. Theoretically, some Fortran implementations can support decimal arithmetic, but I am not aware of implementations in which selected_real_kind(4, 4, 10) returns a value other than -5;
2.2. It is possible to store rounded numbers as strings;
2.3. You can use the Fortran binding of the GMP library; functions with the mpq_ prefix are designed to work with rational numbers.
3. There are no simple and reliable ways to write rounded numbers to a netCDF file while preserving their properties for the reader of that file:
3.1. netCDF supports 'Packed Data Values', i.e. you can declare an integer type with the attributes 'scale_factor' and 'add_offset' and save arrays of integers. But in the file 'scale_factor' will be stored as a single- or double-precision floating-point number, i.e. its value will differ from 0.1. Accordingly, when the netCDF library computes unpacked_data_value = packed_data_value*scale_factor + add_offset on reading, there will be a rounding error. (You can set scale_factor=0.1*(1.+epsilon(1.)) or scale_factor=0.1d0*(1d0+epsilon(1d0)) to avoid a long run of '9' digits.);
3.2. There are C_format and FORTRAN_format attributes, but it is quite difficult to predict which reader will use which attribute, or whether they will use them at all;
3.3. You can store rounded numbers as strings or user-defined types;
3.4. Use write() with edit descriptor 'Fw.1', see above.

`std::sin` is wrong in the last bit

I am porting some program from Matlab to C++ for efficiency. It is important for the output of both programs to be exactly the same (**).
I am facing different results for this operation:
std::sin(0.497418836818383950) = 0.477158760259608410 (C++)
sin(0.497418836818383950) = 0.47715876025960846000 (Matlab)
N[Sin[0.497418836818383950], 20] = 0.477158760259608433 (Mathematica)
So, as far as I know, both C++ and Matlab use IEEE 754 double arithmetic. I think I have read somewhere that IEEE 754 allows different results in the last bit. Using Mathematica to decide, it seems like C++ is closer to the true result. How can I force Matlab to compute the sin with precision to the last bit included, so that the results are the same?
In my program this behaviour leads to big errors, because the numerical differential equation solver keeps amplifying this last-bit error. However, I am not sure that the C++ ported version is correct. I am guessing that even if IEEE 754 allows the last bit to be different, it somehow guarantees that this error does not grow when the result is used in further IEEE 754 double operations (because otherwise, two different programs, each correct according to the IEEE 754 standard, could produce completely different outputs). So the other question is: Am I right about this?
I would like to get an answer to both bolded questions. Edit: The first question is proving quite controversial, but it is the less important one; can someone comment on the second one?
Note: This is not an error in the printing, just in case you want to check, this is how I obtained these results:
http://i.imgur.com/cy5ToYy.png
Note (**): What I mean by this is that the final output, which is the result of some calculations showing some real numbers with 4 decimal places, needs to be exactly the same. The error I talk about in the question gets bigger (because of more operations, each of which differs between Matlab and C++), so the final differences are huge. (If you are curious enough to see how the difference starts getting bigger, here is the full output [link soon], but this has nothing to do with the question.)
Firstly, if your numerical method depends on the accuracy of sin to the last bit, then you probably need to use an arbitrary precision library, such as MPFR.
The IEEE754 2008 standard doesn't require that the functions be correctly rounded (it does "recommend" it though). Some C libms do provide correctly rounded trigonometric functions: I believe that the glibc libm does (typically used on most linux distributions), as does CRlibm. Most other modern libms will provide trig functions that are within 1 ulp (i.e. one of the two floating point values either side of the true value), often termed faithfully rounded, which is much quicker to compute.
None of those values you printed could actually arise as IEEE 64-bit floating point values (even if rounded): the 3 nearest (printed to full precision) are:
0.477158760259608 405451814405751065351068973541259765625
0.477158760259608 46096296563700889237225055694580078125
0.477158760259608 516474116868266719393432140350341796875
The possible values you could want are:
The exact sin of the decimal .497418836818383950, which is
0.477158760259608 433132061388630377105954125778369485736356219...
(this appears to be what Mathematica gives).
The exact sin of the 64-bit float nearest .497418836818383950:
0.477158760259608 430531153841011107415427334794384396325832953...
In both cases, the first of the above list is the nearest (though only barely in the case of 1).
The sine of the double constant you wrote is about 0x1.e89c4e59427b173a8753edbcb95p-2, whose nearest double is 0x1.e89c4e59427b1p-2. To 20 decimal places, the two closest doubles are 0.47715876025960840545 and 0.47715876025960846096.
Perhaps Matlab is displaying a truncated value? (EDIT: I now see that the fourth-last digit is a 6, not a 0. Matlab is giving you a result that's still faithfully rounded, but it's the farther of the two closest doubles to the desired result. And it's still printing out the wrong number.)
I should also point out that Mathematica is probably trying to solve a different problem---compute the sine of the decimal number 0.497418836818383950 to 20 decimal places. You should not expect this to match either the C++ code's result or Matlab's result.
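If you want to inspect these expansions yourself, here is a small sketch (Python, whose floats are IEEE binary64 like a C++ double; the exact hex digits you get depend on your platform's libm):

from decimal import Decimal
import math

x = 0.497418836818383950  # stored as the nearest binary64
print(Decimal(x))         # exact decimal expansion of the stored input
s = math.sin(x)
print(Decimal(s))         # exact expansion of the computed sine
print(s.hex())            # e.g. 0x1.e89c4e59427b1p-2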

Is it possible to combine a number of float values into one float value and extract the values when needed?

I am working on an algorithm for an iPhone app, where the data I need to keep in memory exceeds the limit, so is it possible to represent a number of float values as one float value and retrieve those values when I need them?
For instance:
float array[4];
array[0]=0.12324;
array[1]=0.56732;
array[2]=0.86555;
array[3]=0.34545;
float combinedvalue=?
Not in general, no. You can't store 4N bits of information in only N bits.
If there's some patten in your numbers, then you might find a scheme. For example, if all your numbers are of similar value, you could potentially store only the differences between the numbers in lower precision.
However, this kind of thing is difficult, and limited.
If those numbers are exactly 5 digits each, you can treat them as ints by multiplying with 100000. Then you'll need 17 bits for each number, 68 bits in total, which (with some bit-shifting) takes up 9 bytes. Does that help, 9 bytes instead of 16?
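A sketch of that packing (Python; it assumes values in [0, 1) with exactly 5 decimal digits, per the answer above):

values = [0.12324, 0.56732, 0.86555, 0.34545]

# pack: 17 bits per value, since 0 <= round(v * 100000) < 131072
packed = 0
for v in values:
    packed = (packed << 17) | int(round(v * 100000))

# unpack in reverse order
unpacked = []
for _ in values:
    unpacked.append((packed & 0x1FFFF) / 100000.0)
    packed >>= 17
unpacked.reverse()
print(unpacked)  # [0.12324, 0.56732, 0.86555, 0.34545]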
Please note that the implementation of your algorithm will also take up memory!
What you are requiring could be accomplished in several different ways.
For instance, in C++ you generally have single-precision floats (4 bytes) as the smallest precision available, though I wouldn't be surprised if there are packages that handle even smaller floating point formats.
Therefore, if you are using double precision floating point values and can get by with less precision then you can switch to a smaller precision.
Now, depending on your range of values you want to store, you might be able to use a fixed-point representation as well, but you will need to be familiar with the nuances of bit shifting and masking, etc. But, another added benefit of this approach is that it could make your program run faster since fixed-point (integer) arithmetic is much faster than floating-point arithmetic.
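For illustration, a toy Q16.16 fixed-point sketch (Python; the 16/16 split is an arbitrary choice):

# Q16.16: 16 integer bits, 16 fractional bits
def to_fixed(x):
    return int(round(x * (1 << 16)))

def from_fixed(q):
    return q / float(1 << 16)

a, b = to_fixed(3.25), to_fixed(1.5)
prod = (a * b) >> 16     # fixed-point multiply needs one rescale
print(from_fixed(prod))  # 4.875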
The choice between these options depends on the data you need to store and how comfortable you are with lower-level binary arithmetic.

How do I compress a large number of similar doubles?

I want to store billions (10^9) of double precision floating point numbers in memory and save space. These values are grouped in thousands of ordered sets (they are time series), and within a set, I know that the difference between values is usually not large (compared to their absolute value). Also, the closer to each other, the higher the probability of the difference being relatively small.
A perfect fit would be a delta encoding that stores only the difference of each value to its predecessor. However, I want random access to subsets of the data, so I can't depend on going through a complete set in sequence. I'm therefore using deltas to a set-wide baseline that yields deltas which I expect to be within 10 to 50 percent of the absolute value (most of the time).
I have considered the following approaches:
divide the smaller value by the larger one, yielding a value between 0 and 1 that could be stored as an integer of some fixed precision plus one bit for remembering which number was divided by which. This is fairly straightforward and yields satisfactory compression, but is not a lossless method and thus only a secondary choice.
XOR the IEEE 754 binary64 encoded representations of both values and store the lengths of the long stretches of zeroes at the beginning of the exponent and mantissa, plus the remaining bits which were different (see the sketch after this list). Here I'm quite unsure how to judge the compression, although I think it should be good in most cases.
Are there standard ways to do this? What might be problems about my approaches above? What other solutions have you seen or used yourself?
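For concreteness, the XOR from the second approach looks like this (a minimal Python sketch; the function name is mine):

import struct

def xor_doubles(a, b):
    # reinterpret each double as a 64-bit unsigned integer, then XOR
    ia = struct.unpack('<Q', struct.pack('<d', a))[0]
    ib = struct.unpack('<Q', struct.pack('<d', b))[0]
    return ia ^ ib

# similar values share sign, exponent and leading mantissa bits,
# so the XOR has long runs of zeros that should compress well
print(format(xor_doubles(105.591, 105.6), '064b'))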
Rarely are all the bits of a double-precision number meaningful.
If you have billions of values that are the result of some measurement, find the calibration and error of your measurement device. Quantize the values so that you only work with meaningful bits.
Often, you'll find that you only need 16 bits of actual dynamic range. You can probably compress all of this into arrays of "short" that retain all of the original input.
Use a simple "Z-score technique" where every value is really a signed fraction of the standard deviation.
So a sequence of samples with a mean of m and a standard deviation of s gets transformed into a bunch of Z-scores. Normal Z-score transformations use a double, but you should use a fixed-point version of that double: s/1000 or s/16384 or something that retains only the actual precision of your data, not the noise bits on the end.
# forward transform: sample -> fixed-point Z-score
for u in samples:
    z = int(16384 * (u - m) / s)

# inverse transform: fixed-point Z-score -> sample
for z in scaled_samples:
    u = s * (z / 16384.0) + m
Your Z-scores retain a pleasant, easy-to-work-with statistical relationship with the original samples.
Let's say you use a signed 16-bit Z-score: you have +/- 32,768. Scale this by 16,384 and your Z-scores have an effective resolution of 0.000061.
If you use a signed 24-bit Z-score, you have +/- 8 million. Scale this by 4,194,304 and you have a resolution of 0.00000024.
I seriously doubt you have measuring devices this accurate. Further, any arithmetic done as part of filter, calibration or noise reduction may reduce the effective range because of noise bits introduced during the arithmetic. A badly thought-out division operator could make a great many of your decimal places nothing more than noise.
Whatever compression scheme you pick, you can decouple it from the problem of needing to perform arbitrary seeks by compressing into fixed-size blocks and prepending to each block a header containing all the data required to decompress it. For a delta encoding scheme, the block would contain deltas encoded in some fashion that takes advantage of their small magnitude to make them take less space (e.g. fewer bits for exponent/mantissa, conversion to a fixed-point value, Huffman encoding, etc.), and the header a single uncompressed sample; seeking then becomes a matter of cheaply selecting the appropriate block, then decompressing it.
If the compression ratio is so variable that much space is being wasted padding the compressed data to produce fixed-size blocks, a directory of offsets into the compressed data could be built instead, with the state required for decompression recorded in it.
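A minimal sketch of the fixed-size-block scheme (Python; zlib stands in for whichever per-block codec you end up choosing, and the block size is an arbitrary choice):

import struct
import zlib

BLOCK = 4096  # doubles per block

def compress(values):
    """Compress values into independent blocks plus an offset directory."""
    offsets, blocks, pos = [], [], 0
    for i in range(0, len(values), BLOCK):
        chunk = values[i:i + BLOCK]
        comp = zlib.compress(struct.pack('<%dd' % len(chunk), *chunk))
        offsets.append((pos, len(comp)))  # where this block starts, and its size
        blocks.append(comp)
        pos += len(comp)
    return offsets, b''.join(blocks)

def read_value(offsets, data, index):
    """Random access: decompress only the block containing index."""
    off, size = offsets[index // BLOCK]
    raw = zlib.decompress(data[off:off + size])
    return struct.unpack_from('<d', raw, (index % BLOCK) * 8)[0]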
If you know a group of doubles has the same exponent, you could store the exponent once, and only store the mantissa for each value.