I'm developing for a platform without a math library, so I need to build my own tools. My current way of getting the fraction is to convert the float to fixed point (multiply with (float)0xFFFF, cast to int), get only the lower part (mask with 0xFFFF) and convert it back to a float again.
However, the imprecision is killing me. I'm using my Frac() and InvFrac() functions to draw an anti-aliased line. Using modf I get a perfectly smooth line. With my own method pixels start jumping around due to precision loss.
This is my code:
const float fp_amount = (float)(0xFFFF);
const float fp_amount_inv = 1.f / fp_amount;
inline float Frac(float a_X)
{
return ((int)(a_X * fp_amount) & 0xFFFF) * fp_amount_inv;
}
inline float InvFrac(float a_X)
{
return (0xFFFF - (int)(a_X * fp_amount) & 0xFFFF) * fp_amount_inv;
}
Thanks in advance!
If I understand your question correctly, you just want the part after the decimal point, right? You don't actually need it as a fraction (integer numerator and denominator)?
So we have some number, say 3.14159 and we want to end up with just 0.14159. Assuming our number is stored in float f;, we can do this:
f = f-(long)f;
Which, if we insert our number, works like this:
0.14159 = 3.14159 - 3;
What this does is remove the whole number portion of the float, leaving only the decimal portion. When you convert the float to a long, it drops the decimal portion. Then when you subtract that from your original float, you're left with only the decimal portion. A long is used here rather than an int because the integer type needs enough range to hold the whole-number part: an int isn't necessarily large enough, and a wider integer type makes the truncating cast less likely to overflow (though no integer type covers a float's full range).
As I suspected, modf does not use any arithmetic per se -- it's all shifts and masks, take a look here. Can't you use the same ideas on your platform?
I would recommend taking a look at how modf is implemented on the systems you use today. Check out uClibc's version.
http://git.uclibc.org/uClibc/tree/libm/s_modf.c
(As for licensing, it appears to be BSD licensed, but you'd obviously want to double check.)
Some of the macros are defined here.
There's a bug in your constants. You're basically trying to do a left shift of the number by 16 bits, mask off everything but the lower bits, then right shift by 16 bits again. Shifting is the same as multiplying by a power of 2, but you're not using a power of 2 - you're using 0xFFFF, which is off by 1. Replacing this with 0x10000 will make the formula work as intended.
I'm not completely sure, but I think what you are doing is wrong, since you are only considering the mantissa and forgetting the exponent completely.
You need to use the exponent to shift the value in the mantissa to find the actual integer part.
For a description of the storage mechanism of 32bit floats, take a look here.
Why go to floating point at all for your line drawing? You could just stick to your fixed point version and use an integer/fixed point based line drawing routine instead - Bresenham's comes to mind. While this version isn't anti-aliased, I know there are others that are.
Bresenham's line drawing
Seems like maybe you want this.
float f = something;
float fractionalPart = f - floor(f);
Your method is assuming that there are 16 bits in the fractional part (and as Mark Ransom notes, that means you should shift by 16 bits, i.e. multiply by 0x10000). That might not be true. The exponent is what determines how many bits there are in the fractional part.
To put this in a formula, your method works by calculating (x mod 1.0) as ((x << 16) mod (1 << 16)) >> 16, and it's that hardcoded 16 which should depend on the exponent - the exact replacement depends on your float format.
double frac(double val)
{
return val - trunc(val);
}
// frac(1.0) = 1.0 - 1.0 = 0.0 correct
// frac(-1.0) = -1.0 - -1.0 = 0.0 correct
// frac(1.4) = 1.4 - 1.0 = 0.4 correct
// frac(-1.4) = -1.4 - -1.0 = -0.4 correct
Simple and works for -ve and +ve
One option is to use fmod(x, 1).
First of all, an IEEE 754 half-precision floating point number uses 16 bits: 1 sign bit, 5 exponent bits, and 10 mantissa bits. The actual value works out to (-1)^sign * 2^(exponent-15) * (1 + mantissa/1024).
I'm trying to run an image detection program using half precision. The original program uses single precision (= float). I'm using the half precision class from http://half.sourceforge.net/. With the half class I can at least run the same program (by using half instead of float, compiling with g++ instead of gcc, and after many, many type casts...).
I found a problem where multiplication seems to be wrong.
Here is sample code that shows the problem. (To print a half-precision number I have to cast it to float to see the value, and automatic casting doesn't take place in operations between half and integer, so I put in some casts.)
#include <stdio.h>
#include "half.h"
using half_float::half;
typedef half Dtype;
int main()
{
#if 0 // method 0 : this makes sx 600, which is wrong.
int c = 325;
Dtype w_scale = (Dtype)1.847656;
Dtype sx = Dtype(c*w_scale);
printf("sx = %f\n", (float)sx); // <== shows 600.000 which is wrong.
#else // method 1, which also produces wrong result..
int c = 325;
Dtype w_scale = (Dtype)1.847656;
Dtype sx = (Dtype)((Dtype)c*w_scale);
printf("sx = %f\n", (float)sx);
printf("w_scale specified as 1.847656 was 0x%x\n", *(unsigned short *)&w_scale);
#endif
}
The result looks like this :
w_scale = 0x3f63
sx = 600
sx = 0x60b0
But the sx should be 325 * 1.847656 = 600.4882. What can be wrong?
ADD: When I first posted this question, I didn't expect the value to be exactly 600.4882, but something close to it. I later found that half precision, with its limit of only 3~4 significant digits, makes 600.0 the closest representable value to the product. Though everybody knows floating point has this kind of limitation, some people will make the same mistake I did by overlooking the fact that half precision keeps only 3~4 significant digits. So I think this question is worth a look by future askers. (On Stack Overflow, I think some people treat every question as the same old question when it's actually a slightly different case. And it doesn't harm to have a couple of similar questions.)
I figured out why. Half precision has an effective precision of approximately log10(2^10), i.e. 3 or 4 significant digits. I wanted sx to print as 600.488 or something close, but that cannot be represented in half precision.
This part came during the image preprocessing that can be done without 16 bit precision (our tentative hardware), so I can just use float operation for this stage.
ADD: This anomaly came up during image dimension calculation, and we have no reason to use 16-bit floats for that; only the image data (pixels, or feature map data) needs to be 16-bit float. Having written this out, that seems like a good general rule.
So I have two functions that should do the same thing:
float ver1(float a0, float a1) {
float r0 = a0 - a1;
if (abs(r0) > PI) {
if (r0 > 0) {
r0 -= PI2;
} else {
r0 += PI2;
}
}
return r0;
}
float ver2(float a0, float a1) {
float a2 = a1 - PI2;
float r0 = a0 - a1;
float r1 = a0 - a2;
if (abs(r0) < abs(r1)) {
return r0;
}
if (abs(r0) > abs(r1)) {
return r1;
}
return 0;
}
note: PI and PI2 are float constants of pi and 2*pi
The odd thing is that sometimes they produce different results. For example, if you feed them 0.28605145 and 5.9433694, the first one returns 0.62586737 and the second one 0.62586755, and I can't figure out what's causing this.
If you manually calculate what the result should be, you'll find that the second answer is correct. I use this function in a 2D physics sim, and the really odd thing is that the first answer (the wrong one) works there while the second one (the right one) makes it act all kinds of crazy. Such a tiny difference from an unknown source, and such a profound effect :|
At this point I'm switching to matrices anyway, but this odd situation got me curious - anybody know what's going on?
float typically has a precision of about 24 bits, or about 7 decimal places.
You are subtracting two numbers of similar magnitude (r0+PI2 in the first, a1-PI2 in the second), and so are experiencing loss of significance - several of the most significant bits of the result are zero, so there are fewer bits left to represent the difference. That is why the answers match to only about 6 decimal places.
If you need more precision, then a double or a 32-bit or larger fixed-point representation might be more suitable than a float. There are also arbitrary-precision libraries available, such as GMP, which can represent numbers with all the precision you need, although arithmetic will be significantly slower than with built-in types.
You should use the fabs() function instead of abs(), because the C abs() only works with integers (in C++, std::abs from <cmath> does provide floating-point overloads). You can get weird, wrong results when the integer abs() is applied to floating-point values.
Floating point numbers don't behave like mathematical real numbers. Every operation on two of them may introduce a rounding error, so I wouldn't call the first version correct and the second incorrect on the strength of one example. You need to be careful with every operation you do on floats if you want to keep the error small.
The error is generally smaller if the magnitudes of the numbers are in the same range, and it tends to be bigger when the ranges differ.
For example 10000000.0 + 0.1 - 10000000.0 is hardly ever 0.1.
If you know the ranges of the input you can adjust the code to reduce errors.
I would like to know how to write my own floor function to round a float down.
Is it possible to do this by setting the bits of a float that represent the numbers after the comma to 0?
If yes, then how can I access and modify those bits?
Thanks.
You can do bit twiddling on floating point numbers, but getting it right depends on knowing exactly what the floating point binary representation is. For most machines these days it's IEEE 754, which is reasonably straightforward. For example, IEEE 754 32-bit floats have 1 sign bit, 8 exponent bits, and 23 mantissa bits, so you can use shifts and masks to extract those fields and do things with them. So doing trunc (round to integer towards 0) is pretty easy:
#include <cstdint>   // for uint32_t

float trunc(float x) {
union {
float f;
uint32_t i;
} val;
val.f = x;
int exponent = (val.i >> 23) & 0xff;        // extract the exponent field
int fractional_bits = 127 + 23 - exponent;  // mantissa bits below the binary point
if (fractional_bits > 23) // abs(x) < 1.0
return 0.0;
if (fractional_bits > 0)
val.i &= ~((1U << fractional_bits) - 1);    // clear the fractional bits
return val.f;
}
First, we extract the exponent field, and use that to calculate how many bits after the
binary point are present in the number. If there are more than the size of the mantissa, then we just return 0. Otherwise, if there's at least 1, we mask off (clear) that many low bits. Pretty simple. We're ignoring denormals, NaN, and infinity here, but that works out OK, as they have exponents of all 0s or all 1s, which means we end up converting denormals to 0 (they get caught in the first if, along with small normal numbers) and leaving NaN/Inf unchanged.
To do a floor, you'd also need to look at the sign, and round negative numbers 'up' towards negative infinity, as sketched below.
Note that this is almost certainly slower than using dedicated floating point instructions, so this sort of thing is really only useful if you need to use floating point numbers on hardware that has no native floating point support, or if you just want to play around and learn how these things work at a low level.
Define it from scratch. And no, setting the bits of your floating point number that represent the digits after the decimal point to 0 will not work. If you look at IEEE-754, you will see that you basically have all your floating-point numbers in the form:
0.xyzxyzxyz * 2^(abc)
So to implement flooring, you can get the xyzxyzxyz and shift left by abc+1 times. Drop the rest. I suggest you read up on the binary representation of a floating point number (link above), this should shed light on the solution I suggested.
NOTE: You also need to take care of the sign bit, and remember that the stored exponent is biased by 127.
Here is an example. Let's say you have the number pi, 3.14..., and you want to get 3.
Pi is represented in binary as
0 10000000 10010010000111111011011
This translates to
sign = 0 ; e = 1 ; s = 110010010000111111011011
The above I get directly from Wikipedia. Since e is 1, you will want to shift s left by 1 + 1 = 2 places, so you get 11 (binary) => 3.
#include <iostream>
#include <iomanip>
double round(double input, double roundto) {
return int(input / roundto) * roundto;
}
int main() {
double pi = 3.1415926353898;
double almostpi = round(pi, 0.0001);
std::cout << std::setprecision(14) << pi << '\n' << std::setprecision(14) << almostpi;
}
http://ideone.com/mdqFA
output:
3.1415926353898
3.1415
This will pretty much be faster than any bit twiddling you can come up with. And it works on all computers (with floats) instead of just one type.
Casting to an unsigned integer and returning the result as a double does the truncation for you under the hood. This simple piece of code works for any POSITIVE number.
#include <iostream>
double floor(const double& num) {
return (unsigned long long) num;
}
This has been tested on tio.run (Try It Online) and onlinegdb.com. The function itself doesn't require any #include files; stdio.h was only included there to print out the answers (it isn't included here). Here it is:
long double myFloor(long double x) /* Change this to your liking: long double might
be float in your situation. */
{
long double xcopy = x < 0 ? x * -1 : x; // work on the absolute value
int zeros = 0;
long double n = 1;
// Find the largest power of ten (n) that does not exceed xcopy, counting its zeros.
for (n = 1; xcopy > n * 10; n *= 10, ++zeros);
// Strip the integer part digit by digit: keep subtracting n, and on overshoot
// back up and drop to the next smaller power of ten.
for (xcopy -= n; zeros != -1; xcopy -= n)
if (xcopy < 0)
{
xcopy += n;
n /= 10;
--zeros;
}
xcopy += n; // undo the final overshoot; xcopy now holds the fractional part
return x < 0 ? (xcopy == 0 ? x : x - (1 - xcopy)) : (x - xcopy);
}
This function should work everywhere (pretty sure) because it peels the value apart decimal digit by decimal digit instead of poking at the bit layout of the float.
The floor of a floating point number is the biggest integer less than or equal to it. Here are a some examples:
floor(5.7) = 5
floor(3) = 3
floor(9.9) = 9
floor(7.0) = 7
floor(-7.9) = -8
floor(-5.0) = -5
floor(-3.3) = -3
floor(0) = 0
floor(-0.0) = -0
floor(-0) = -0
Note: this is almost an exact copy from my other answer which answered a question that was basically the same as this one.
For example, this blog says 0.005 is not exactly 0.005, but rounding that number yields the right result.
I have tried all kinds of rounding in C++ and it fails when rounding numbers to certain decimal places. For example, Round(x,y) rounds x to a multiple of y. So Round(37.785,0.01) should give you 37.79 and not 37.78.
I am reopening this question to ask the community for help. The problem is the imprecision of floating point numbers (37.785 is represented as 37.78499999999...).
The question is how does Excel get around this problem?
The solution in the existing "round() for float in C++" question is incorrect for the above problem.
"Round(37.785,0.01) should give you 37.79 and not 37.78."
First off, there is no consensus that 37.79 rather than 37.78 is the "right" answer here. Tie-breakers are always a bit tough. While always rounding up in the case of a tie is a widely-used approach, it certainly is not the only one.
Secondly, this isn't a tie-breaking situation. The numerical value in the IEEE binary64 floating point format is 37.784999999999997 (approximately). There are lots of ways to get a value of 37.784999999999997 besides a human typing in a value of 37.785 and happen to have that converted to that floating point representation. In most of these cases, the correct answer is 37.78 rather than 37.79.
Addendum
Consider the following Excel formulae:
=ROUND(37785/1000,2)
=ROUND(19810222/2^19+21474836/2^47,2)
Both cells will display the same value, 37.79. There is a legitimate argument over whether 37785/1000 should round to 37.78 or 37.79 with two place accuracy. How to deal with these corner cases is a bit arbitrary, and there is no consensus answer. There isn't even a consensus answer inside Microsoft: "the Round() function is not implemented in a consistent fashion among different Microsoft products for historical reasons." ( http://support.microsoft.com/kb/196652 ) Given an infinite precision machine, Microsoft's VBA would round 37.785 to 37.78 (banker's round) while Excel would yield 37.79 (symmetric arithmetic round).
There is no argument over the rounding of the latter formula. It is strictly less than 37.785, so it should round to 37.78, not 37.79. Yet Excel rounds it up. Why?
The reason has to do with how real numbers are represented in a computer. Microsoft, like many others, uses the IEEE 64 bit floating point format. The number 37785/1000 suffers from precision loss when expressed in this format. This precision loss does not occur with 19810222/2^19+21474836/2^47; it is an "exact number".
I intentionally constructed that exact number to have the same floating point representation as does the inexact 37785/1000. That Excel rounds this exact value up rather than down is the key to determining how Excel's ROUND() function works: It is a variant of symmetric arithmetic rounding. It rounds based on a comparison to the floating point representation of the corner case.
The algorithm in C++:
#include <cmath> // std::floor
// Compute 10 to some positive integral power.
// Dealing with overflow (exponent > 308) is an exercise left to the reader.
double pow10 (unsigned int exponent) {
double result = 1.0;
double base = 10.0;
while (exponent > 0) {
if ((exponent & 1) != 0) result *= base;
exponent >>= 1;
base *= base;
}
return result;
}
// Round the same way Excel does.
// Dealing with nonsense such as nplaces=400 is an exercise left to the reader.
double excel_round (double x, int nplaces) {
bool is_neg = false;
// Excel uses symmetric arithmetic round: Round away from zero.
// The algorithm will be easier if we only deal with positive numbers.
if (x < 0.0) {
is_neg = true;
x = -x;
}
// Construct the nearest rounded values and the nasty corner case.
// Note: We really do not want an optimizing compiler to put the corner
// case in an extended double precision register. Hence the volatile.
double round_down, round_up;
volatile double corner_case;
if (nplaces < 0) {
double scale = pow10 (-nplaces);
round_down = std::floor (x / scale);
corner_case = (round_down + 0.5) * scale;
round_up = (round_down + 1.0) * scale;
round_down *= scale;
}
else {
double scale = pow10 (nplaces);
round_down = std::floor (x * scale);
corner_case = (round_down + 0.5) / scale;
round_up = (round_down + 1.0) / scale;
round_down /= scale;
}
// Round by comparing to the corner case.
x = (x < corner_case) ? round_down : round_up;
// Correct the sign if needed.
if (is_neg) x = -x;
return x;
}
For very accurate arbitrary precision and rounding of floating point numbers to a fixed set of decimal places, you should take a look at a math library like GNU MPFR. While it's a C-library, the web-page I posted also links to a couple different C++ bindings if you want to avoid using C.
You may also want to read a paper entitled "What every computer scientist should know about floating point arithmetic" by David Goldberg at the Xerox Palo Alto Research Center. It's an excellent article demonstrating the underlying process that allows floating point numbers to be approximated in a computer that represents everything in binary data, and how rounding errors and other problems can creep up in FPU-based floating point math.
I don't know how Excel does it, but printing floating point numbers nicely is a hard problem: http://www.serpentine.com/blog/2011/06/29/here-be-dragons-advances-in-problems-you-didnt-even-know-you-had/
So your actual question seems to be, how to get correctly rounded floating point -> string conversions. By googling for those terms you'll get a bunch of articles, but if you're interested in something to use, most platforms provide reasonably competent implementations of sprintf()/snprintf(). So just use those, and if you find bugs, file a report to the vendor.
A function that takes a floating point number as an argument and returns another floating point number rounded exactly to a given number of decimal digits cannot be written, because many numbers with a finite decimal representation have an infinite binary representation; one of the simplest examples is 0.1.
To achieve what you want you must accept to use a different type as a result of your rounding function. If your immediate need is printing the number you can use a string and a formatting function: the problem becomes how to obtain exactly the formatting you expect. Otherwise if you need to store this number in order to perform exact calculations on it, for instance if you are doing accounting, you need a library that's capable of representing decimal numbers exactly. In this case the most common approach is to use a scaled representation: an integer for the value together with the number of decimal digits. Dividing the value by ten raised to the scale gives you the original number.
If any of these approaches is suitable, I'll try and expand my answer with practical suggestions.
Excel rounds numbers like this "correctly" by doing WORK. They started in 1985, with a fairly "normal" set of floating-point routines, and added some scaled-integer fake floating point, and they've been tuning those things and adding special cases ever since. The app DID used to have most of the same "obvious" bugs that everybody else did, it's just that it mostly had them a long time ago. I filed a couple myself, back when I was doing tech support for them in the early 90s.
I believe the following C# code rounds numbers as they are rounded in Excel. To exactly replicate the behavior in C++ you might need to use a special decimal type.
In plain English, the double-precision number is converted to a decimal and then rounded to fifteen significant digits (not to be confused with fifteen decimal places). The result is rounded a second time to the specified number of decimal places.
That might seem weird, but what you have to understand is that Excel always displays numbers that are rounded to 15 significant figures. If the ROUND() function weren't using that display value as a starting point, and used the internal double representation instead, then there would be cases where ROUND(A1,N) did not seem to correspond to the actual value in A1. That would be very confusing to a non-technical user.
The double which is closest to 37.785 has an exact decimal value of 37.784999999999996589394868351519107818603515625. (Any double can be represented precisely by a finite base ten decimal because one quarter, one eighth, one sixteenth, and so forth all have finite decimal expansions.) If that number were rounded directly to two decimal places, there would be no tie to break and the result would be 37.78. If you round to 15 significant figures first you get 37.7850000000000. If this is further rounded to two decimal places, then you get 37.79, so there is no real mystery after all.
using System;
using System.Globalization;

// Convert to a floating decimal point number, round to fifteen
// significant digits, and then round to the number of places
// indicated.
static decimal SmartRoundDouble(double input, int places)
{
int numLeadingDigits = (int)Math.Log10(Math.Abs(input)) + 1;
decimal inputDec = GetAccurateDecimal(input);
inputDec = MoveDecimalPointRight(inputDec, -numLeadingDigits);
decimal round1 = Math.Round(inputDec, 15);
round1 = MoveDecimalPointRight(round1, numLeadingDigits);
decimal round2 = Math.Round(round1, places, MidpointRounding.AwayFromZero);
return round2;
}
static decimal MoveDecimalPointRight(decimal d, int n)
{
if (n > 0)
for (int i = 0; i < n; i++)
d *= 10.0m;
else
for (int i = 0; i > n; i--)
d /= 10.0m;
return d;
}
// The constructor for decimal that accepts a double does
// some rounding by default. This gets a more exact number.
static decimal GetAccurateDecimal(double r)
{
string accurateStr = r.ToString("G17", CultureInfo.InvariantCulture);
return Decimal.Parse(accurateStr, CultureInfo.InvariantCulture);
}
What you NEED is this :
double f = 22.0/7.0;
cout.setf(ios::fixed, ios::floatfield);
cout.precision(6);
cout<<f<<endl;
How it can be implemented (just an overview, for rounding the last digit):
long getRoundedPrec(double d, double precision = 9)
{
precision = (int)precision;
long l = (d - ((double)((int)d))) * pow(10.0, precision + 1); // fractional part, scaled with one extra digit
int lastDigit = (l - ((l / 10) * 10)); // same as l % 10
if (lastDigit >= 5) {
l = l / 10 + 1; // round the last kept digit up
} else {
l = l / 10;     // otherwise just drop the extra digit
}
return l;
}
Just as base-10 numbers must be rounded as they are converted to base-2, it is possible to round a number as it is converted from base-2 to base-10. Once the number has a base-10 representation it can be rounded again in a straightforward manner by looking at the digit to the right of the one you wish to round.
While there's nothing wrong with the above assertion, there's a much more pragmatic solution. The problem is that the binary representation tries to get as close as possible to the decimal number, even if that binary is less than the decimal. The amount of error is within [-0.5,0.5] least significant bits (LSB) of the true value. For rounding purposes you'd rather it be within [0,1] LSB so that the error is always positive, but that's not possible without changing all the rules of floating point math.
The one thing you can do is add 1 LSB to the value, so the error is within [0.5,1.5] LSB of the true value. This is less accurate overall, but only by a very tiny amount; when the value is rounded for representation as a decimal number it is much more likely to be rounded to a proper decimal number because the error is always positive.
To add 1 LSB to the value before rounding it, see the answers to this question. For example in Visual Studio C++ 2010 the procedure would be:
Round(_nextafter(37.785,37.785*1.1),0.01);
There are many ways to clean up the result of a floating-point value, using statistical, numerical... algorithms.
The easiest one is probably searching for runs of 9s or 0s near the limit of the precision. If there are any, those trailing digits are probably noise, so just round them away. This may not work in many cases. Here's an example for a float with 6 digits of precision:
2.67899999 → 2.679
12.3499999 → 12.35
1.20000001 → 1.2
Excel always limits the input to 15 digits and rounds the output to at most 15 digits, so this might be one of the methods Excel uses.
Or you can carry the precision along with the number, and after each step adjust the accuracy depending on the precision of the operands. For example:
1.113 → 3 decimal digits
6.15634 → 5 decimal digits
Since both numbers are inside the double's 15-17 digit precision range, their sum will be accurate to the larger of the two precisions, which is 5 decimal digits. Similarly, 3+5 < 16, so their product will be precise to 8 decimal digits:
1.113 + 6.15634 = 7.26934 → 5 decimal digits
1.113 * 6.15634 = 6.85200642 → 8 decimal digits
But 4.1341677841 * 2.251457145 will only get double accuracy, because the exact result exceeds double's precision.
Another efficient algorithm is Grisu but I haven't had an opportunity to try.
In 2010, Florian Loitsch published a wonderful paper in PLDI, "Printing floating-point numbers quickly and accurately with integers", which represents the biggest step in this field in 20 years: he mostly figured out how to use machine integers to perform accurate rendering! Why do I say "mostly"? Because although Loitsch's "Grisu3" algorithm is very fast, it gives up on about 0.5% of numbers, in which case you have to fall back to Dragon4 or a derivative
Here be dragons: advances in problems you didn’t even know you had
In fact I think Excel must combine many different methods to achieve the best result of all
Example When a Value Reaches Zero
In Excel 95 or earlier, enter the following into a new workbook:
A1: =1.333+1.225-1.333-1.225
Right-click cell A1, and then click Format Cells. On the Number tab, click Scientific under Category. Set the Decimal places to 15.
Rather than displaying 0, Excel 95 displays -2.22044604925031E-16.
Excel 97, however, introduced an optimization that attempts to correct for this problem. Should an addition or subtraction operation result in a value at or very close to zero, Excel 97 and later will compensate for any error introduced as a result of converting an operand to and from binary. The example above when performed in Excel 97 and later correctly displays 0 or 0.000000000000000E+00 in scientific notation.
Floating-point arithmetic may give inaccurate results in Excel
As mjfgates says, Excel does hard work to get this "right". The first thing to do when you try to reimplement this, is define what you mean by "right". Obvious solutions:
implement rational arithmetic
Slow but reliable.
implement a bunch of heuristics
Fast but tricky to get right (think "years of bug reports").
It really depends on your application.
Most decimal fractions can't be accurately represented in binary.
double x = 0.0;
for (int i = 1; i <= 10; i++)
{
x += 0.1;
}
// x should now be 1.0, right?
//
// it isn't. Test it and see.
One solution is to use BCD. It's old. But, it's also tried and true. We have a lot of other old ideas that we use every day (like using a 0 to represent nothing...).
Another technique uses scaling upon input/output. This has the advantage of nearly all math being integer math.
I'm trying to optimize the following. The code below does this:
If a = 0.775 and I need precision 2 dp then a => 0.78
Basically, if the last digit is 5, it rounds upwards the next digit, otherwise it doesn't.
My problem was that 0.45 doesn't round to 0.5 with 1 decimal point, as the value is stored as 0.44999999343... and setprecision rounds it to 0.4.
That's why setprecision is forced higher, setprecision(p+10), and then if it really ends in a 5, a small amount is added in order to round up correctly.
Once done, it compares a with string b and returns the result. The problem is, this function is called a few billion times, making the program crawl. Any better ideas on how to rewrite/optimize this, and which functions in the code are heaviest on the machine?
bool match(double a,string b,int p) { //p = precision no greater than 7dp
double t[] = {0.2, 0.02, 0.002, 0.0002, 0.00002, 0.000002, 0.0000002, 0.00000002};
stringstream buff;
string temp;
buff << setprecision(p+10) << setiosflags(ios_base::fixed) << a; // 10 decimal precision
buff >> temp;
if(temp[temp.size()-10] == '5') a += t[p]; // help to round upwards
ostringstream test;
test << setprecision(p) << setiosflags(ios_base::fixed) << a;
temp = test.str();
if(b.compare(temp) == 0) return true;
return false;
}
I wrote an integer square root subroutine with nothing more than a couple dozen lines of ASM, with no API calls whatsoever - and it still could only do about 50 million SqRoots/second (this was about five years ago ...).
The point I'm making is that if you're going for billions of calls, even today's technology is going to choke.
But if you really want to make an effort to speed it up, remove as many API usages as humanly possible. This may require you to perform API tasks manually, instead of letting the libraries do it for you. Specifically, remove any type of stream operation. Those are slower than dirt in this context. You may really have to improvise there.
The only thing left to do after that is to replace as many lines of C++ as you can with custom ASM - but you'll have to be a perfectionist about it. Make sure you are taking full advantage of every CPU cycle and register - as well as every byte of CPU cache and stack space.
You may consider using integer values instead of floating-points, as these are far more ASM-friendly and much more efficient. You'd have to multiply the number by 10^7 (or 10^p, depending on how you decide to form your logic) to move the decimal all the way over to the right. Then you could safely convert the floating-point into a basic integer.
You'll have to rely on the computer hardware to do the rest.
<--Microsoft Specific-->
I'll also add that C++ identifiers (including static ones, as Donnie DeBoer mentioned) are directly accessible from ASM blocks nested into your C++ code. This makes inline ASM a breeze.
<--End Microsoft Specific-->
Depending on what you want the numbers for, you might want to use fixed point numbers instead of floating point. A quick search turns up this.
I think you can just add 0.005 for precision to hundredths, 0.0005 for thousandths, etc., snprintf the result with something like "%1.2f" (hundredths; "%1.3f" for thousandths, etc.), and compare the strings. You should be able to table-ize or parameterize this logic.
You could save some major cycles in your posted code by just making that double t[] static, so that it's not allocating it over and over.
Try this instead:
#include <cmath>
double setprecision(double x, int prec) {
return
ceil( x * pow(10,(double)prec) - .4999999999999)
/ pow(10,(double)prec);
}
It's probably faster. Maybe try inlining it as well, but that might hurt if it doesn't help.
Example of how it works:
2.345* 100 (10 to the 2nd power) = 234.5
234.5 - .4999999999999 = 234.0000000000001
ceil( 234.0000000000001 ) = 235
235 / 100 (10 to the 2nd power) = 2.35
The .4999999999999 was chosen because of the precision for a C++ double on a 32 bit system. If you're on a 64 bit platform you'll probably need more nines. If you increase the nines further on a 32 bit system it overflows and rounds down instead of up, i.e. 234.00000000000001 gets truncated to 234 in a double in (my) 32 bit environment.
Using floating point (an inexact representation) means you've lost some information about the true number. You can't simply "fix" the value stored in the double by adding a fudge value. That might fix certain cases (like .45), but it will break other cases. You'll end up rounding up numbers that should have been rounded down.
Here's a related article:
http://www.theregister.co.uk/2006/08/12/floating_point_approximation/
I'm taking a guess at what you really mean to do. I suspect you're trying to see if a string contains a decimal representation of a double to some precision. Perhaps it's an arithmetic quiz program and you're trying to see if the user's response is "close enough" to the real answer. If that's the case, then it may be simpler to convert the string to a double and see if the absolute value of the difference between the two doubles is within some tolerance.
#include <iomanip>
#include <sstream>
#include <string>

double string_to_double(const std::string &s)
{
std::stringstream buffer(s);
double d = 0.0;
buffer >> d;
return d;
}
bool match(const std::string &guess, double answer, int precision)
{
const static double thresh[] = { 0.5, 0.05, 0.005, 0.0005, /* etc. */ };
const double g = string_to_double(guess);
const double delta = g - answer;
return -thresh[precision] < delta && delta <= thresh[precision];
}
Another possibility is to round the answer first (while it's still numeric) BEFORE converting it to a string.
bool match2(const std::string &guess, double answer, int precision)
{
const static double thresh[] = {0.5, 0.05, 0.005, 0.0005, /* etc. */ };
const double rounded = answer + thresh[precision];
std::stringstream buffer;
buffer << std::setprecision(precision) << rounded;
return guess == buffer.str();
}
Both of these solutions should be faster than your sample code, but I'm not sure if they do what you really want.
As far as I can see, you are checking whether a, rounded to p decimal places, equals b.
Instead of converting a to a string, go the other way and convert the string to a double
- (just multiplications and additions, or only additions using a small table)
- then subtract the two numbers and check whether the difference is within the proper range (if p == 1, then abs(b - a) < 0.05)
An old-time developer's trick from the dark ages of pounds, shillings and pence in the old country.
The trick was to store the value as a whole number of half-pennies (or whatever your smallest unit is). Then all your subsequent arithmetic is straightforward integer arithmetic, and rounding etc. takes care of itself.
So in your case you store your data in units of 200ths of whatever you are counting,
do simple integer calculations on these values, and divide by 200 into a float variable whenever you want to display the result.
I believe Boost has a "BigDecimal" library these days, but your requirement for run-time speed would probably exclude this otherwise excellent solution.