Unwanted rounding in C++

I have the following equations:
//get thermistor resistor value
temp=(THERMISTOR_R0)/((temp2/temp)-1);
//get temperature value in Kelvins and convert to Celsius
temp=(THERMISTOR_BETA)/log(temp/(THERMISTOR_R0*exp((-THERMISTOR_BETA)/298)));
temp-=273;
desiredVoltage =((15700-(25*temp))/10);
THERMISTOR_R0 and THERMISTOR_BETA are constants.
temp, temp2 and desiredVoltage are unsigned int and are defined before calculations.
The problem is, for example, when the term ((temp2/temp)-1) falls below 1, it rounds down to 0. I want to get rid of this rounding as it is causing huge problems with my calculations.
How do I do this?

It's not rounding, it's integer division. If both operands of the / operator are of integer types the behavior of C++ is to perform an integer division, which keeps only the integer part of the result (this is often needed in some algorithms because it's faster).
To get a "regular" division, make sure that at least one of the operands involved is of a floating point type (float, double or long double); you can do this either by declaring the variables involved as FP types
double temp2, temp;
or by sticking a cast in front of one of the operands:
temp=(THERMISTOR_R0)/((double(temp2)/temp)-1);
(notice that here you'll still incur truncation if temp, the variable the result is assigned to, is of an integral type).
Most probably, here you'll simply want to declare temp and temp2 as double (or float if you are working in a really resource-tight environment).
Also, when dividing by a numeric literal, keep in mind that if you don't write the decimal point it will be an int literal; if you do write it, it will be a double. E.g., 298 is an int, 298. is a double, so 1/2 is 0, but 1/2. is 0.5.
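For example, a minimal sketch of the difference (the values here are arbitrary, not from the question):

#include <iostream>

int main()
{
    unsigned int temp2 = 3, temp = 2;
    std::cout << temp2 / temp << "\n";         // integer division: prints 1
    std::cout << double(temp2) / temp << "\n"; // floating point division: prints 1.5
    return 0;
}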

Make sure that you use floating point types if you want floating point division behaviour.

Conversion from a floating point value to an integer doesn't round, it truncates toward zero, so for positive values the fractional part is simply dropped.
Generally this problem is solved by adding 0.5 to the value before the conversion to an integer. You may want to explicitly use floating point values (or cast), then add 0.5 to the result before you cast back to integers.
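As a sketch, assuming the value is known to be non-negative (the helper name here is made up for illustration):

// hypothetical helper: round a non-negative double to the nearest unsigned int
unsigned int roundToUnsigned(double x)
{
    return static_cast<unsigned int>(x + 0.5); // e.g. 339.7 + 0.5 -> 340.2 -> 340
}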

Related

Using scientific notation in for loops

I've recently come across some code which has a loop of the form
for (int i = 0; i < 1e7; i++){
}
I question the wisdom of doing this since 1e7 is of a floating point type, and will cause i to be promoted when evaluating the stopping condition. Should this be cause for concern?
The elephant in the room here is that the range of an int could be as small as -32767 to +32767, and the behaviour on assigning a larger value than this to such an int is undefined.
But, as for your main point, indeed it should concern you, as it is a very bad habit: yes, 1e7 is a floating point literal of type double.
The fact that i will be converted to a floating point due to type promotion rules is somewhat moot: the real damage is done if there is unexpected truncation of the apparent integral literal. By way of a "proof by example", consider first the loop
for (std::uint64_t i = std::numeric_limits<std::uint64_t>::max() - 1024; i ++< 18446744073709551615ULL; ){
std::cout << i << "\n";
}
This outputs every consecutive value of i in the range, as you'd expect. Note that std::numeric_limits<std::uint64_t>::max() is 18446744073709551615ULL, which is 1 less than the 64th power of 2. (Here I'm using a slide-like "operator" ++< which is useful when working with unsigned types. Many folk consider --> and ++< as obfuscating but in scientific programming they are common, particularly -->.)
Now on my machine, a double is an IEEE754 64 bit floating point. (Such a scheme is particularly good at representing powers of 2 exactly - IEEE754 double can represent powers of 2 up to 2^1023 exactly.) So 18,446,744,073,709,551,616 (the 64th power of 2) can be represented exactly as a double. The nearest representable number before that is 18,446,744,073,709,550,592 (which is 1024 less).
So now let's write the loop as
for (std::uint64_t i = std::numeric_limits<std::uint64_t>::max() - 1024; i ++< 1.8446744073709551615e19; ){
std::cout << i << "\n";
}
On my machine that will only output one value of i: 18,446,744,073,709,550,592 (the number that we've already seen). This proves that 1.8446744073709551615e19 is a floating point type. If the compiler was allowed to treat the literal as an integral type then the output of the two loops would be equivalent.
It will work, assuming that your int is at least 32 bits.
However, if you really want to use exponential notation, you would do better to define an integer constant outside the loop and use proper casting, like this:
const int MAX_INDEX = static_cast<int>(1.0e7);
...
for (int i = 0; i < MAX_INDEX; i++) {
...
}
Considering this, I'd say it is much better to write
const int MAX_INDEX = 10000000;
or if you can use C++14
const int MAX_INDEX = 10'000'000;
1e7 is a literal of type double, and usually double is 64-bit IEEE 754 format with a 52-bit mantissa. Roughly every tenth power of 2 corresponds to a third power of 10, so double should be able to represent integers up to at least 10^(5*3) = 10^15 exactly. And if int is 32-bit then int has roughly 10^(3*3) = 10^9 as max value (asking Google search for "2**31 - 1" it says 2 147 483 647, i.e. twice the rough estimate).
So, in practice it's safe on current desktop systems and larger.
But C++ allows int to be just 16 bits, and on e.g. an embedded system with that small int, one would have Undefined Behavior.
If the intention is to loop for an exact integer number of iterations, for example when iterating over exactly all the elements in an array, then comparing against a floating point value may not be such a good idea, solely for accuracy reasons; since the implicit conversion of an integer to floating point truncates toward zero, there's no real danger of out-of-bounds access, it will just cut the loop short.
Now the question is: When do these effects actually kick in? Will your program experience them? The floating point representation usually used these days is IEEE 754. As long as the exponent is 0 a floating point value is essentially an integer. C double precision floats have 52 bits of mantissa, which gives you integer precision up to a value of 2^52, which is on the order of about 1e15. Without the suffix f specifying that you want a floating point literal to be interpreted as single precision, the literal will be double precision, and the implicit conversion will target that as well. So as long as your loop end condition is less than 2^52 it will work reliably!
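A quick way to see where that limit bites (a sketch; note that the exact threshold below which all consecutive integers are representable in an IEEE 754 double is 2^53):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint64_t big = 1ULL << 53; // 9007199254740992, exactly representable as a double
    std::cout << std::boolalpha;
    std::cout << (double(big) == double(big + 1)) << "\n"; // true: big + 1 rounds back to big
    std::cout << (double(big - 1) == double(big)) << "\n"; // false: still distinct below 2^53
    return 0;
}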
Now one question you have to think about on the x86 architecture is efficiency. The very first 80x87 FPUs came in a different package, and later as a different chip, and as a result getting values into the FPU registers is a bit awkward at the x86 assembly level. Depending on what your intentions are it might make a difference in runtime for a realtime application; but that's premature optimization.
TL;DR: Is it safe to do? Most certainly yes. Will it cause trouble? It could cause numerical problems. Could it invoke undefined behavior? Depends on how you use the loop end condition, but if i is used to index an array and for some reason the array length ended up in a floating point variable, a conversion that always truncates toward zero is not going to cause a logical problem. Is it a smart thing to do? Depends on the application.

Rounding in C++ and round-tripping numbers

I have a class that internally represents some quantity in fixed point as 32-bit integer with somewhat arbitrary denominator (it is neither power of 2 nor power of 10).
For communicating with other applications the quantity is converted to plain old double on output and back on input. As code inside the class it looks like:
int32_t quantity;
double GetValue() { return double(quantity) / DENOMINATOR; }
void SetValue(double x) { quantity = x * DENOMINATOR; }
Now I need to ensure that if I output some value as double and read it back, I will always get the same value back. I.e. that
x.SetValue(x.GetValue());
will never change x.quantity (x is arbitrary instance of the class containing the above code).
The double representation has more digits of precision, so it should be possible. But it will almost certainly not be the case with the simplistic code above.
What rounding do I need to use and
How can I find the critical would-be corner cases to test that the rounding is indeed correct?
Any 32-bit integer will be represented exactly when you convert it to a double, but when you divide and then multiply by an arbitrary value you will get a similar value, not exactly the same one. You should lose at most one bit per operation, which means your double will be almost the same, prior to casting back to an int.
However, since int casts are truncations, you will get the wrong result when very minor errors turn 2.000 into 1.999; so what you need is a simple rounding step prior to casting back.
You can use std::lround() for this if you have C++11, else you can write your own rounding function.
You probably don't care about fairness much here, so the common int(doubleVal + 0.5) will work for positives. If, as seems likely, you have negatives, try this:
int round(double d) { return static_cast<int>(d < 0 ? d - 0.5 : d + 0.5); } // shifting by 0.5 before the truncating cast rounds to nearest, halves away from zero
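Putting the pieces together, a sketch of the fixed setter plus a brute-force spot check (assuming C++11; the DENOMINATOR value below is an arbitrary placeholder, since the question doesn't give the real one):

#include <cassert>
#include <cmath>
#include <cstdint>

const std::int32_t DENOMINATOR = 25731; // placeholder value

std::int32_t quantity;

double GetValue() { return double(quantity) / DENOMINATOR; }
void SetValue(double x) { quantity = static_cast<std::int32_t>(std::lround(x * DENOMINATOR)); }

int main()
{
    // the round-trip must preserve quantity, including for negative values
    for (std::int32_t q = -100000; q <= 100000; ++q) {
        quantity = q;
        SetValue(GetValue());
        assert(quantity == q);
    }
    return 0;
}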
The problem you describe is the same problem which exists with converting between binary and decimal representation just with different bases. At least it exists if you want to have the double representation to be a good approximation of the original value (otherwise you could just multiply the 32 bit value you have with your fixed denominator and store the result in a double).
Assuming you want the double representation be a good approximation of your actual value the conversions are nontrivial! The conversion from your internal representation to double can be done using Dragon4 ("How to print floating point numbers accurately", Steele & White) or Grisu ("How to print floating point numbers quickly and accurately", Loitsch; I'm not sure if this algorithm is independent from the base, though). The reverse can be done using Bellerophon ("How to read floating point numbers accurately", Clinger). These algorithms aren't entirely trivial, though...

double and float comparison [duplicate]

This question already has answers here:
Comparing float and double
According to this post, when comparing a float and a double, the float should be treated as double.
The following program, does not seem to follow this statement. The behaviour looks quite unpredictable.
Here is my program:
#include <cstdio>
#include <iostream>
using namespace std;

int main()
{
    double a = 1.1; // 1.5
    float b = 1.1;  // 1.5
    printf("%X %X\n", a, b); // note: passing floating point arguments for %X is undefined behavior
    if (a == b)
        cout << "success" << endl;
    else
        cout << "fail" << endl;
}
When I run this program, I get "fail" displayed.
However, when I change a and b to 1.5, it displays "success".
I have also printed the hex notations of the values. They are different in both the cases. My compiler is Visual Studio 2005
Can you explain this output ? Thanks.
float f = 1.1;
double d = 1.1;
if (f == d)
In this comparison, the value of f is promoted to type double. The problem you're seeing isn't in the comparison, but in the initialization. 1.1 can't be represented exactly as a floating-point value, so the values stored in f and d are the nearest value that can be represented. But float and double are different sizes, so have a different number of significant bits. When the value in f is promoted to double, there's no way to get back the extra bits that were lost when the value was stored, so you end up with all zeros in the extra bits. Those zero bits don't match the bits in d, so the comparison is false. And the reason the comparison succeeds with 1.5 is that 1.5 can be represented exactly as a float and as a double; it has a bunch of zeros in its low bits, so when the promotion adds zeros the result is the same as the double representation.
I found a decent explanation of the problem you are experiencing as well as some solutions.
See How dangerous is it to compare floating point values?
Just a side note, remember that some values can not be represented EXACTLY in IEEE 754 floating point representation. Your same example using a value of say 1.5 would compare as you expect because there is a perfect representation of 1.5 without any loss of data. However, 1.1 in 32-bit and 64-bit are in fact different values because the IEEE 754 standard can not perfectly represent 1.1.
See http://www.binaryconvert.com
double a = 1.1 --> 0x3FF199999999999A
Approximate representation = 1.10000000000000008881784197001
float b = 1.1 --> 0x3f8ccccd
Approximate representation = 1.10000002384185791015625
As you can see, the two values are different.
Also, unless you are working in some limited memory type environment, it's somewhat pointless to use floats. Just use doubles and save yourself the headaches.
If you are not clear on why some values can not be accurately represented, consult a tutorial on how to convert a decimal to floating point.
Here's one: http://class.ece.iastate.edu/arun/CprE281_F05/ieee754/ie5.html
I would regard code which directly performs a comparison between a float and a double without a typecast to be broken; even if the language spec says that the float will be implicitly converted, there are two different ways that the comparison might sensibly be performed, and neither is sufficiently dominant to really justify a "silent" default behavior (i.e. one which compiles without generating a warning). If one wants to perform a conversion by having both operands evaluated as double, I would suggest adding an explicit type cast to make one's intentions clear. In most cases other than tests to see whether a particular double->float conversion will be reversible without loss of precision, however, I suspect that comparison between float values is probably more appropriate.
Fundamentally, when comparing floating-point values X and Y of any sort, one should regard comparisons as indicating that X or Y is larger, or that the numbers are "indistinguishable". A comparison which shows X is larger should be taken to indicate that the number that Y is supposed to represent is probably smaller than X or close to X. A comparison that says the numbers are indistinguishable means exactly that. If one views things in such fashion, comparisons performed by casting to float may not be as "informative" as those done with double, but are less likely to yield results that are just plain wrong. By comparison, consider:
double x, y;
float f = x;
If one compares f and y, it's possible that what one is interested in is how y compares with the value of x rounded to a float, but it's more likely that what one really wants to know is whether, knowing the rounded value of x, one can say anything about the relationship between x and y. If x is 0.1 and y is 0.2, f will have enough information to say whether x is larger than y; if y is 0.100000001, it will not. In the latter case, if both operands are cast to double, the comparison will erroneously imply that x was larger; if they are both cast to float, the comparison will report them as indistinguishable. Note that comparison results when casting both operands to double may be erroneous not only when values are within a part per million; they may be off by hundreds of orders of magnitude, such as if x=1e40 and y=1e300. Compare f and y as float and they'll compare indistinguishable; compare them as double and the smaller value will erroneously compare larger.
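As a sketch of the two choices (the values are chosen to mirror the question):

#include <iostream>

int main()
{
    double d = 1.1;
    float f = 1.1f;
    std::cout << std::boolalpha;
    std::cout << (d == f) << "\n";                     // false: f is promoted to double, low bits differ
    std::cout << (static_cast<float>(d) == f) << "\n"; // true: both compared at float precision
    return 0;
}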
The reason why the rounding error occurs with 1.1 and not with 1.5 is due to the number of bits required to accurately represent a number like 0.1 in floating point format. In fact an accurate representation is not possible.
See How To Represent 0.1 In Floating Point Arithmetic And Decimal for an example, particularly the answer by #paxdiablo.

multiplication of double with integer precision

I have a double of 3.4. However, when I multiply it with 100, it gives 339 instead of 340. It seems to be caused by the precision of double. How could I get around this?
Thanks
First what is going on:
3.4 can't be represented exactly as a binary fraction. So the implementation chooses the closest binary fraction that is representable. I am not sure whether it always rounds towards zero or not, but in your case the represented number is indeed smaller.
The conversion to integer truncates, that is, it uses the closest integer with smaller absolute value.
Since both conversions are biased in the same direction, you can always get a rounding error.
Now you need to know what you want, but probably you want to use symmetrical rounding, i.e. find the closest integer, be it smaller or larger. This can be implemented as
#include <cmath>
int round(double x) { return static_cast<int>(std::floor(x + 0.5)); } // floor is provided; round is not (before C++11)
or
int round(double x) { return x < 0 ? x - 0.5 : x + 0.5; }
I am not completely sure it's indeed rounding towards zero, so please verify the latter if you use it.
If you need full precision, you might want to use something like Boost.Rational.
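If Boost is available, a tiny sketch of that exact-arithmetic route:

#include <boost/rational.hpp>
#include <iostream>

int main()
{
    boost::rational<int> r(17, 5); // exactly 3.4
    std::cout << r * 100 << "\n";  // prints 340/1 - no rounding error anywhere
    return 0;
}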
You could use two integers and multiply the fractional part by multiplier / 10.
E.g.:
int d[2] = {3,4};
int n = (d[0] * 100) + (d[1] * 10);
That works if you really want all that precision on either side of the decimal point. It really does depend on the application.
Floating-point values are seldom exact. Unfortunately, when casting a floating-point value to an integer in C++, the value is truncated towards zero. This means that if you have 339.999999, the result of the cast will be 339.
To overcome this, you could add (or, for negatives, subtract) 0.5 before the cast. In this case 339.999999 + 0.5 => 340.499999 => 340 (when converted to an int).
Alternatively, you could use one of the many conversion functions provided by the standard library.
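For example, with the C++11 rounding functions from <cmath> (a minimal sketch):

#include <cmath>
#include <iostream>

int main()
{
    double d = 3.4;
    std::cout << static_cast<int>(d * 100) << "\n"; // 339: the cast truncates toward zero
    std::cout << std::lround(d * 100) << "\n";      // 340: rounds to nearest (C++11)
    return 0;
}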
You don't have a double with the value of 3.4, since 3.4 isn't representable as a double (at least on the common machines, and most of the exotics as well). What you have is some value very close to 3.4. After multiplication, you have some value very close to 340. But certainly not 339.

Where are you seeing the 339? I'm guessing that you're simply casting to int, using static_cast, because this operation truncates toward zero. Other operations would likely do what you want: outputting in fixed format with 0 positions after the decimal, for example, rounds (in an implementation defined manner, but all of the implementations I know use round to even by default); the function round rounds to nearest, rounding away from zero in halfway cases (but your results will not be anywhere near a halfway case). This is the rounding used in commercial applications.

The real question is what you are doing that requires an exact integral value. Depending on the application, it may be more appropriate to use int or long, scaling the actual values as necessary (i.e. storing 100 times the actual value, or whatever), or some sort of decimal arithmetic package, rather than to use double.
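For instance, a sketch of the fixed-format output route mentioned above:

#include <iomanip>
#include <iostream>

int main()
{
    double d = 3.4 * 100; // some value very close to, but just below, 340
    std::cout << std::fixed << std::setprecision(0) << d << "\n"; // prints 340: formatted output rounds
    return 0;
}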

Does casting to an int after std::floor guarantee the right result?

I'd like a floor function with the syntax
int floor(double x);
but std::floor returns a double. Is
static_cast <int> (std::floor(x));
guaranteed to give me the correct integer, or could I have an off-by-one problem? It seems to work, but I'd like to know for sure.
For bonus points, why the heck does std::floor return a double in the first place?
The range of double is way greater than the range of 32 or 64 bit integers, which is why std::floor returns a double. Casting to int should be fine so long as it's within the appropriate range - but be aware that a double can't represent all 64 bit integers exactly, so you may also end up with errors when you go beyond the point at which the accuracy of double is such that the difference between two consecutive doubles is greater than 1.
static_cast <int> (std::floor(x));
does pretty much what you want, yes. It gives you the nearest integer, rounded towards -infinity. At least as long as your input is in the range representable by ints.
I'm not sure what you mean by "adding .5 and whatnot", but it won't have the same effect.
And std::floor returns a double because that's the most general. Sometimes you might want to round off a float or double, but preserve the type. That is, round 1.3f to 1.0f, rather than to 1.
That'd be hard to do if std::floor returned an int (or at least you'd have an extra, unnecessary cast in there slowing things down).
If floor only performs the rounding itself, without changing the type, you can cast that to int if/when you need to.
Another reason is that the range of doubles is far greater than that of ints. It may not be possible to round all doubles to ints.
The C++ standard says (4.9.1):
"An rvalue of a floating point type can be converted to an rvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type".
So if you are converting a double to an int, the number is within the range of int, and the rounding you want is toward zero, then it is enough to simply cast the number to int:
(int)x;
If you want to deal with various numeric conditions and want to handle different types of conversions in a controlled way, then maybe you should look at the Boost.NumericConversion. This library allows to handle weird cases (like out-of-range, rounding, ranges, etc.)
Here is the example from the documentation:
#include <cassert>
#include <boost/numeric/conversion/converter.hpp>
#include <boost/numeric/conversion/bounds.hpp>

int main()
{
    typedef boost::numeric::converter<int, double> Double2Int;

    int x = Double2Int::convert(2.0);
    assert(x == 2);

    int y = Double2Int()(3.14); // As a function object.
    assert(y == 3);             // The default rounding is trunc.

    try
    {
        double m = boost::numeric::bounds<double>::highest();
        int z = Double2Int::convert(m); // By default throws positive_overflow()
        (void)z;
    }
    catch (boost::numeric::positive_overflow const&)
    {
    }

    return 0;
}
Most of the standard math library uses doubles but provides float versions as well. std::floorf() is the single precision version of std::floor() if you'd prefer not to use doubles.
Edit: I've removed part of my previous answer. I had stated that the floor was redundant when casting to int, but I forgot that this is only true for positive floating point numbers.
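A quick illustration of that caveat:

#include <cmath>
#include <iostream>

int main()
{
    double x = -1.5;
    std::cout << static_cast<int>(x) << "\n";             // -1: a bare cast truncates toward zero
    std::cout << static_cast<int>(std::floor(x)) << "\n"; // -2: floor rounds toward -infinity
    return 0;
}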