modf returns 1 as the fractional part - C++

I have this static method that receives a double and "cuts" its fractional tail, leaving two digits after the dot. It works almost all the time, but I have noticed that when
it receives 2.3 it turns it into 2.29. This does not happen for 0.3, 1.3, 3.3, 4.3 or 102.3.
The code basically multiplies the number by 100, uses modf, divides the integer part by 100 and returns it.
Here the code catches this one specific number and prints out:
static double dRound(double number) {
    bool c = false;
    if (number == 2.3)
        c = true;
    int factor = pow(10, 2);
    number *= factor;
    if (c) {
        cout << " number *= factor : " << number << endl;
        //number = 230;// When this is not marked as comment the code works well.
    }
    double returnVal;
    if (c) {
        cout << " fractional : " << modf(number, &returnVal) << endl;
        cout << " integer : " << returnVal << endl;
    }
    modf(number, &returnVal);
    return returnVal / factor;
}
it prints out:
number *= factor : 230
fractional : 1
integer : 229
Does anybody know why this is happening and how I can fix it?
Thank you, and have a great weekend.

Remember that floating point numbers cannot represent most decimal values exactly. 2.3 * 100 actually gives 229.99999999999997, so modf returns 229 and 0.9999999999999716.
However, cout only displays floating point numbers to 6 significant digits by default, so the 0.9999999999999716 is shown as 1.
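A quick way to see this yourself is to raise the stream precision above the default before printing; the hidden digits then appear (a minimal sketch, separate from the question's code):
#include <iostream>
#include <iomanip>

int main()
{
    // prints 229.99999999999997 instead of the 230 shown at default precision
    std::cout << std::setprecision(17) << 2.3 * 100 << '\n';
    return 0;
}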
You could add (roughly) half the spacing to the next representable value (the upper error limit of the input) before truncating, which avoids the 2.3 error:
#include <cmath>
#include <limits>

static double dRound(double d) {
    double inf = copysign(std::numeric_limits<double>::infinity(), d);
    double theNumberAfter = nextafter(d, inf);
    double epsilon = theNumberAfter - d;   // one ulp at d
    int factor = 100;
    d *= factor;
    epsilon *= factor / 2;
    d += epsilon;                          // nudge upward by half a scaled ulp
    double returnVal;
    modf(d, &returnVal);                   // truncate to the integer part
    return returnVal / factor;
}
Result: http://www.ideone.com/ywmua
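If rounding to the nearest hundredth (rather than truncating) is acceptable, a simpler hedged alternative is std::round from <cmath>; note that this is a different behaviour from the truncation in the question, and dRoundNearest is just an illustrative name:
#include <cmath>

static double dRoundNearest(double number)
{
    const int factor = 100;
    // 2.3 * 100 == 229.99999999999997 rounds to 230, giving 2.3 back
    return std::round(number * factor) / factor;
}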

Here is a way without rounding:
double double_cut(double d)
{
    long long x = d * 100;    // truncate to hundredths via integer conversion
    return x / 100.0;
}
If you instead want rounding based on the 3rd digit after the decimal point, here is a solution:
double double_cut_round(double d)
{
    long long x = d * 1000;
    if (x > 0)
        x += 5;
    else
        x -= 5;
    return (x / 10) / 100.0;  // keep two digits, rounded by the third
}
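A short driver for the two helpers above (illustrative only; it assumes both functions are in scope, and the truncating version can still come out one hundredth low whenever d * 100 itself lands just below the intended value, as with 2.3 in the question):
#include <iostream>

int main()
{
    std::cout << double_cut(102.3456) << '\n';        // 102.34
    std::cout << double_cut_round(102.3456) << '\n';  // 102.35
    return 0;
}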

Related

My program for calculating pi using Chudnovsky in C++ precision problem

My code:
#include <iostream>
#include <iomanip>
#include <cmath>

long double fac(long double num) {
    long double result = 1.0;
    for (long double i = 2.0; i < num; i++)
        result *= i;
    return result;
}

int main() {
    using namespace std;
    long double pi = 0.0;
    for (long double k = 0.0; k < 10.0; k++) {
        pi += (pow(-1.0, k) * fac(6.0 * k) * (13591409.0 + (545140134.0 * k)))
            / (fac(3.0 * k) * pow(fac(k), 3.0) * pow(640320.0, 3.0 * k + 3.0 / 2.0));
    }
    pi *= 12.0;
    cout << setprecision(100) << 1.0 / pi << endl;
    return 0;
}
My output:
3.1415926535897637228433865175247774459421634674072265625
The problem with this output is that it printed 56 digits instead of 100. How do I fix that?
First of all, your factorial is wrong: the loop should be for (long double i=2.0; i<=num; i++) instead of i<num !!!
As mentioned in the comments, a double can hold only ~16 significant digits, so your 100 digits are not doable by this method. To remedy this there are two ways:
use a high precision datatype
There are libs for this, or you can implement it on your own; you need just a few basic operations. Note that to represent 100 digits you need at least
ceil(100 digits / log10(2)) = 333 bits
of mantissa or fixed-point integer, while double has only 53 bits:
53 * log10(2) = 15.954589770191003346328161420398 digits
use a different method of computing PI
For arbitrary precision I recommend using BBP. However, if you want just 100 digits you can use a simple series-based program like this one, which works digit by digit on small integers (no need for any high precision datatype nor FPU):
//The following 160 character C program, written by Dik T. Winter at CWI, computes pi to 800 decimal digits.
int a=10000,b=0,c=2800,d=0,e=0,f[2801],g=0;main(){for(;b-c;)f[b++]=a/5;
for(;d=0,g=c*2;c-=14,printf("%.4d",e+d/a),e=d%a)for(b=c;d+=f[b]*a,f[b]=d%--g,d/=g--,--b;d*=b);}
Aside from the obvious precision limits, your implementation is also poor from both performance and precision standpoints, which is why you lose precision so soon: you hit the double precision limits at very low iterations of k. Rewrite the iterations so the subresults stay as small as possible (in terms of bits of mantissa) and avoid unnecessary computations. Here are a few hints:
Why are you computing the same factorials again and again?
You have k! in a loop where k is incrementing, so why not just multiply k into a variable holding the current factorial instead? For example:
//for ( k=0;k<10;k++){ ... fac(k) ... }
for (f=1,k=0;k<10;k++){ if (k) f*=k; ... f ... }
Why are you dividing by factorials again and again?
If you think about it a bit, then for a > b you can compute this instead:
a! / b! = (1*2*3*4*...*b*...*a) / (1*2*3*4*...*b)
a! / b! = (b+1)*(b+2)*...*(a)
I would not use pow at all for this
pow is "very complex" function causing further precision and performance losses for example pow(-1.0,k) can be done like this:
//for ( k=0;k<10;k++){ ... pow(-1.0,k) ... }
for (s=+1,k=0;k<10;k++){ s=-s; ... s ... }
Also pow(640320.0, 3.0 * k + 3.0/2.0) can be computed incrementally in the same way as the factorial, and for pow(fac(k), 3.0) you can simply multiply by the variable holding fac(k) three times instead ...
The term pow(640320.0, 3.0 * k + 3.0/2.0) outgrows even (6k)!,
so you can divide one by the other to keep the subresults smaller ...
These few simple tweaks enhance the precision a lot: you overflow double precision much, much later, because the subresults are far smaller than the naive ones (factorials tend to grow really fast).
Putting all together leads to this:
double pi_Chudnovsky() // no pow,fac lower subresult
{   // https://en.wikipedia.org/wiki/Chudnovsky_algorithm
    double pi, s, f, f3, k, k3, k6, p, dp, q, r;
    for (pi = 0.0, s = 1.0, f = f3 = 1, k = k3 = k6 = 0.0, p = 640320.0, dp = p*p*p, p *= sqrt(p), r = 13591409.0; k < 27.0; k++, s = -s)
    {
        if (k) // f=k!, f3=(3k)!, p=pow(640320.0,3k+1.5)*(3k)!/(6k)!, r=13591409.0+(545140134.0*k)
        {
            p *= dp; r += 545140134.0;
            f *= k; k3++; f3 *= k3; k6++; p /= k6; p *= k3;
                    k3++; f3 *= k3; k6++; p /= k6; p *= k3;
                    k3++; f3 *= k3; k6++; p /= k6; p *= k3;
                                    k6++; p /= k6;
                                    k6++; p /= k6;
                                    k6++; p /= k6;
        }
        q = s*r; q /= f; q /= f; q /= f; q /= p; pi += q;
    }
    return 1.0 / (pi * 12.0);
}
As you can see, k goes up to 27, while the naive method can go only up to 18 on 64-bit doubles before overflow. However, the result is the same, as the double mantissa is already saturated after 2 iterations ...
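A minimal driver for the routine above, assuming pi_Chudnovsky() is defined in the same file; the printable accuracy is still capped by double at roughly 16 significant digits:
#include <iostream>
#include <iomanip>
#include <cmath>

int main()
{
    // prints approximately 3.141592653589793
    std::cout << std::setprecision(16) << pi_Chudnovsky() << '\n';
    return 0;
}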
I am feeling happy due to the following code :)
/*
I have compiled using Cygwin.
Change between "iostream ... using namespace std" and iostream.h depending on your compiler and OS.
*/
#include <iostream>
#include <iomanip>
#include <cmath>
using namespace std;

long double fac(long double num)
{
    long double result = 1.0;
    for (long double i = 2.0; num > i; ++i)
    {
        result *= i;
    }
    return result;
}

int main()
{
    long double pi = 0.0;
    for (long double k = 0.0; 10.0 > k; ++k)
    {
        pi += (pow(-1.0, k) * fac(6.0 * k) * (13591409.0 + (545140134.0 * k)))
            / (fac(3.0 * k) * pow(fac(k), 3.0) * pow(640320.0, 3.0 * k + 3.0 / 2.0));
    }
    pi *= 12.0;
    cout << "BEFORE USING setprecision VALUE OF DEFAULT PRECISION " << cout.precision() << "\n";
    cout << setprecision(100) << 1.0 / pi << endl;
    cout << "AFTER USING setprecision VALUE OF CURRENT PRECISION WITHOUT USING fixed " << cout.precision() << "\n";
    cout << fixed;
    cout << "AFTER USING setprecision VALUE OF CURRENT PRECISION USING fixed " << cout.precision() << "\n";
    cout << "USING fixed PREVENT THE EARTH'S ROUNDING OFF INSIDE OUR UNIVERSE :)\n";
    cout << setprecision(100) << 1.0 / pi << endl;
    return 0;
}
/*
$ # Sample output:
$ g++ 73256565.cpp -o ./a.out;./a.out
$ ./a.out
BEFORE USING setprecision VALUE OF DEFAULT PRECISION 6
3.14159265358976372457810999350158454035408794879913330078125
AFTER USING setprecision VALUE OF CURRENT PRECISION WITHOUT USING fixed 100
AFTER USING setprecision VALUE OF CURRENT PRECISION USING fixed 100
USING fixed PREVENT THE EARTH'S ROUNDING OFF INSIDE OUR UNIVERSE :)
3.1415926535897637245781099935015845403540879487991333007812500000000000000000000000000000000000000000
*/

Calculating bit_Delta(double p1, double p2) in C++

I am interested in computing the function int bit_Delta(double p1, double p2) for two doubles in the range [0,1). The function returns the index where the two doubles deviate in binary after the dot.
For example, 1/2 = 0.10 in binary and 3/4 = 0.11 in binary, so bit_delta(0.5, 0.75) should return 2: their first digit (1) is the same, and the second digit is the first place where they differ.
I've thought about calculating the mantissa and exponent separately for each double. If the exponents are different, I think I can do it, but if the exponents are the same, I don't know how to use the mantissa. Any ideas?
One way would be to check whether both values are at or above 0.5 (==> both have the first bit set) or both are below 0.5 (==> neither has the first bit set).
If both are at or above 0.5, subtract 0.5 from each; then halve the threshold and continue until you reach a threshold where the values are no longer both above or both below it.
#include <iostream>

int bit_delta(double a, double b)
{
    if (a == b) return -1;
    double threshold = 0.5;
    for (int digit = 1; digit < 20; digit++, threshold /= 2)
    {
        if (a < threshold && b < threshold)
        {
            // both values are below the threshold: this bit is 0 in both, keep going
        }
        else if (a >= threshold && b >= threshold)
        {
            a -= threshold;
            b -= threshold;
        }
        else
            return digit;
    }
    return 20; // comparing more than 20 digits does not make sense for a double
}

int main()
{
    std::cout << bit_delta(0.25, 0.75) << std::endl;
    std::cout << bit_delta(0.5, 0.75) << std::endl;
    std::cout << bit_delta(0.7632, 0.751) << std::endl;
}
This code returns 1 2 7.
The following idea is based on conversion of the double values to fixed-point arithmetic, comparing the integers with XOR and counting the equal most significant bits.
#include <bit>

int bit_delta(double p1, double p2)
{
    unsigned int i1 = static_cast<unsigned int>(p1 * 0x80000000U);
    unsigned int i2 = static_cast<unsigned int>(p2 * 0x80000000U);
    return std::countl_zero(i1 ^ i2);
}
It returns results between 1 and 32.
With positive inputs p1 and p2 below 1.0, the MSB of i1 and i2 is always zero, which is needed to get the counting right.
By using unsigned long long int instead of unsigned int you could increase the precision to 53 bits (i.e. the precision of double numbers).
The function std::countl_zero, from the <bit> header, was introduced in C++20.
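Following the unsigned long long suggestion, here is a hedged 64-bit sketch (bit_delta64 is just an illustrative name; it assumes 0 <= p1, p2 < 1, so the top bit of the scaled integers stays clear):
#include <bit>
#include <cmath>
#include <cstdint>
#include <iostream>

int bit_delta64(double p1, double p2)
{
    // scaling by 2^63 is exact for doubles and keeps the MSB zero for inputs below 1
    std::uint64_t i1 = static_cast<std::uint64_t>(std::ldexp(p1, 63));
    std::uint64_t i2 = static_cast<std::uint64_t>(std::ldexp(p2, 63));
    return std::countl_zero(i1 ^ i2);   // 1..64; positions past ~53 exceed double precision
}

int main()
{
    std::cout << bit_delta64(0.5, 0.75) << '\n';     // 2
    std::cout << bit_delta64(0.7632, 0.751) << '\n'; // 7
}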

how to improve the precision of computing float numbers?

I write a code snippet in Microsoft Visual Studio Community 2019 in C++ like this:
int m = 11;
int p = 3;
float step = 1.0 / (m - 2 * p);
The variable step is 0.200003, but 0.2 is what I wanted. Is there any suggestion to improve the precision?
This problem comes from a UNIFORM KNOT VECTOR. A knot vector is a concept in NURBS; you can think of it as just an array of numbers like this: U[] = {0, 0.2, 0.4, 0.6, 0.8, 1.0}; The span between two adjacent numbers is a constant. The size of the knot vector can change according to some condition, but the range is in [0, 1].
the whole function is:
typedef float NURBS_FLOAT;

void CreateKnotVector(int m, int p, bool clamped, NURBS_FLOAT* U)
{
    if (clamped)
    {
        for (int i = 0; i <= p; i++)
        {
            U[i] = 0;
        }
        NURBS_FLOAT step = 1.0 / (m - 2 * p);
        for (int i = p + 1; i < m - p; i++)
        {
            U[i] = U[i - 1] + step;
        }
        for (int i = m - p; i <= m; i++)
        {
            U[i] = 1;
        }
    }
    else
    {
        U[0] = 0;
        NURBS_FLOAT step = 1.0 / m;
        for (int i = 1; i <= m; i++)
        {
            U[i] = U[i - 1] + step;
        }
    }
}
Let's follow what's going on in your code:
The expression 1.0 / (m - 2 * p) yields 0.2, to which the closest representable double value is 0.200000000000000011102230246251565404236316680908203125. Notice how precise it is – to 16 significant decimal digits. It's because, due to 1.0 being a double literal, the denominator is promoted to double, and the whole calculation is done in double precision, thus yielding a double value.
The value obtained in the previous step is written to step, which has type float. So the value has to be rounded to the closest representable value, which happens to be 0.20000000298023223876953125.
So your cited result of 0.200003 is not what you should get. Instead, it should be closer to 0.200000003.
Is there any suggestion to improve the precision?
Yes. Store the value in a higher-precision variable. E.g., instead of float step, use double step. In this case the value you've calculated won't be rounded once more, so precision will be higher.
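A minimal side-by-side of the two declarations (a sketch using the m and p from the question):
#include <iostream>
#include <iomanip>

int main()
{
    int m = 11, p = 3;
    float  f_step = 1.0 / (m - 2 * p);   // extra rounding to float: about 0.200000003
    double d_step = 1.0 / (m - 2 * p);   // stays in double: about 0.20000000000000001
    std::cout << std::setprecision(12) << f_step << '\n';
    std::cout << std::setprecision(17) << d_step << '\n';
    return 0;
}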
Can you get the exact 0.2 value to work with it in the subsequent calculations? With binary floating-point arithmetic, unfortunately, no. In binary, the number 0.2 is a periodic fraction:
0.2 (decimal) = 0.0011 0011 0011 ... (binary), with the group 0011 repeating forever
See the question Is floating point math broken? and its answers for more details.
If you really need decimal calculations, you should use a library solution, e.g. Boost's cpp_dec_float. Or, if you need arbitrary-precision calculations, you can use e.g. cpp_bin_float from the same library. Note that both variants will be orders of magnitude slower than using built-in C++ binary floating-point types.
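For completeness, a hedged sketch of the decimal route (it assumes Boost.Multiprecision is available; cpp_dec_float_50 carries roughly 50 decimal digits and can hold 0.2 exactly, since it works in decimal):
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <iostream>
#include <iomanip>

int main()
{
    using dec = boost::multiprecision::cpp_dec_float_50;
    dec step = dec(1) / (11 - 2 * 3);   // exactly 0.2 as a decimal float
    std::cout << std::setprecision(30) << step << '\n';
    return 0;
}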
When dealing with floating point math, a certain amount of rounding error is expected.
For starters, values like 0.2 aren't exactly represented by a float, or even a double:
std::cout << std::setprecision(60) << 0.2 << '\n';
// ^^^ It outputs something like: 0.200000000000000011102230246251565404236316680908203125
Besides, the errors may accumulate when a sequence of operations is performed on imprecise values. Some operations, like summation and subtraction, are more sensitive to this kind of error than others, so it'd be better to avoid them if possible.
That seems to be the case here, where we can rewrite the OP's function into something like the following:
#include <iostream>
#include <iomanip>
#include <vector>
#include <algorithm>
#include <cassert>
#include <type_traits>

template <typename T = double>
auto make_knots(int m, int p = 0)   // <- Note that I've changed the signature.
{
    static_assert(std::is_floating_point_v<T>);
    std::vector<T> knots(m + 1);
    int range = m - 2 * p;
    assert(range > 0);
    for (int i = 1; i < m - p; i++)
    {
        knots[i + p] = T(i) / range;   // <- Less prone to accumulate rounding errors
    }
    std::fill(knots.begin() + m - p, knots.end(), 1.0);
    return knots;
}

template <typename T>
void verify(std::vector<T> const& v)
{
    bool sum_is_one = true;
    for (int i = 0, j = v.size() - 1; i <= j; ++i, --j)
    {
        if (v[i] + v[j] != 1.0)   // <- That's a bold request for a floating point type
        {
            sum_is_one = false;
            break;
        }
    }
    std::cout << (sum_is_one ? "\n" : "Rounding errors.\n");
}

int main()
{
    // For presentation purposes only
    std::cout << std::setprecision(60) << 0.2 << '\n';
    std::cout << std::setprecision(60) << 0.4 << '\n';
    std::cout << std::setprecision(60) << 0.6 << '\n';
    std::cout << std::setprecision(60) << 0.8 << "\n\n";

    auto k1 = make_knots(11, 3);
    for (auto i : k1)
    {
        std::cout << std::setprecision(60) << i << '\n';
    }
    verify(k1);

    auto k2 = make_knots<float>(10);
    for (auto i : k2)
    {
        std::cout << std::setprecision(60) << i << '\n';
    }
    verify(k2);
}
One solution to avoid drift (which I guess is your worry?) is to manually use rational numbers, for example in this case you might have:
// your input values for determining step
int m = 11;
int p = 3;
// pre-calculate any intermediate values, which won't have rounding issues
int divider = (m - 2 * p); // could be float or double instead of int
// input
int stepnumber = 1234; // could also be float or double instead of int
// output
float stepped_value = stepnumber * 1.0f / divider;
In other words, formulate your problem so that the step of your original code is always 1 (or whatever rational number you can represent exactly using two integers) internally, so there is no rounding issue. If you need to display the value for the user, do the conversion only for display: 1.0 / divider, rounded to a suitable number of digits.
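A small sketch of that idea applied to the knot vector from the question (values m = 11, p = 3 assumed): keep the state as integers and convert only when displaying.
#include <cstdio>

int main()
{
    int m = 11, p = 3;
    int divider = m - 2 * p;   // the internal state stays integral, so nothing drifts
    for (int stepnumber = 0; stepnumber <= divider; ++stepnumber)
        std::printf("%.3f\n", static_cast<double>(stepnumber) / divider);   // rounding happens only for display
    return 0;
}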

Counting iterations of the Leibniz summation for π in C++

My task is to ask the user how many decimal places of accuracy they want the summation to iterate to, compared to the actual value of pi. So 2 decimal places would stop when the loop reaches 3.14. I have a complete program, but I am unsure if it actually works as intended. I have checked 0 and 1 decimal places with a calculator and they seem to work, but I don't want to assume it works for all of them. Also, my code may be a little clumsy, since we are still learning the basics; we only just learned loops and nested loops. If there are any obvious mistakes or parts that could be cleaned up, I would appreciate any input.
Edit: I only needed to have this work for up to five decimal places. That is why my value of pi was not precise. Sorry for the misunderstanding.
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    const double PI = 3.141592;
    int n, sign = 1;
    double sum = 0, test, m;
    cout << "This program determines how many iterations of the infinite series for\n"
            "pi is needed to get with 'n' decimal places of the true value of pi.\n"
            "How many decimal places of accuracy should there be?" << endl;
    cin >> n;
    double p = PI * pow(10.0, n);
    p = static_cast<double>(static_cast<int>(p) / pow(10, n));
    int counter = 0;
    bool stop = false;
    for (double i = 1; !stop; i = i + 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        counter++;
        test = (4 * sum) * pow(10.0, n);
        test = static_cast<double>(static_cast<int>(test) / pow(10, n));
        if (test == p)
            stop = true;
    }
    cout << "The series was iterated " << counter << " times and reached the value of pi\nwithin "
         << n << " decimal places." << endl;
    return 0;
}
One of the problems of the Leibniz summation is that it has an extremely low convergence rate, as it exhibits sublinear convergence. In your program you also compare a calculated estimation of π with a given value (a 6-digit approximation), while the point of the summation should be to find out the right figures.
You can slightly modify your code to make it terminate the calculation if the wanted digit doesn't change between iterations (I also added a check on a maximum number of iterations). Remember that you are using doubles, not unlimited-precision numbers, and sooner or later rounding errors will affect the calculation. As a matter of fact, the real limitation of this code is the number of iterations it takes (2,428,700,925 to obtain 3.141592653).
#include <iostream>
#include <cmath>
#include <iomanip>
using std::cout;

// this will take a long long time...
const unsigned long long int MAX_ITER = 100000000000;

int main() {
    int n;
    cout << "This program determines how many iterations of the infinite series for\n"
            "pi is needed to get with 'n' decimal places of the true value of pi.\n"
            "How many decimal places of accuracy should there be?\n";
    std::cin >> n;

    // precalculate some values
    double factor = pow(10.0, n);
    double inv_factor = 1.0 / factor;
    double quad_factor = 4.0 * factor;

    long long int test = 0, old_test = 0, sign = 1;
    unsigned long long int count = 0;
    double sum = 0;

    for (long long int i = 1; count < MAX_ITER; i += 2) {
        sum += 1.0 / (i * sign);
        sign = -sign;
        old_test = test;
        test = static_cast<long long int>(sum * quad_factor);
        ++count;
        // perform the test on integer values
        if (test == old_test) {
            cout << "Reached the value of Pi within " << n << " decimal places.\n";
            break;
        }
    }

    double pi_leibniz = static_cast<double>(inv_factor * test);
    cout << "Pi = " << std::setprecision(n + 1) << pi_leibniz << '\n';
    cout << "The series was iterated " << count << " times\n";
    return 0;
}
I have summarized the results of several runs in this table:
digits  Pi             iterations
---------------------------------
   0    3                          8
   1    3.1                       26
   2    3.14                     628
   3    3.141                  2,455
   4    3.1415               136,121
   5    3.14159              376,848
   6    3.141592           2,886,751
   7    3.1415926         21,547,007
   8    3.14159265       278,609,764
   9    3.141592653    2,428,700,925
  10    3.1415926535  87,312,058,383
Your program will never terminate, because test == p will never be true. This is a comparison between two double-precision numbers that are calculated differently. Due to round-off errors, they will not be identical even if you run an infinite number of iterations and your math is correct (and right now it isn't, because the value of PI in your program is not accurate).
To help you figure out what's going on, print the value of test in each iteration, as well as the distance between test and pi, as follows:
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double pi = atan(1.0) * 4;   // Make sure you have a precise value of PI
    double sign = 1.0, sum = 0.0;
    for (int i = 1; i < 1000; i += 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        double test = 4 * sum;
        cout << test << " " << fabs(test - pi) << "\n";
    }
    return 0;
}
Once you are sure the program works well, change the stopping condition to be based on the distance between test and pi:
for (int i=1; fabs(test-pi)>epsilon; i+=2)
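Putting that stopping condition into a complete program might look like this (a sketch with epsilon chosen for roughly 4 decimal places; it is not the code from the question):
#include <iostream>
#include <cmath>

int main() {
    const double pi = std::atan(1.0) * 4;
    const double epsilon = 0.5e-4;           // tolerance for ~4 decimal places
    double sum = 0.0, sign = 1.0, test = 0.0;
    long long count = 0;
    for (long long i = 1; std::fabs(test - pi) > epsilon; i += 2) {
        sum = sum + (1.0 / i) * sign;
        sign = -sign;
        test = 4 * sum;
        ++count;
    }
    std::cout << "Iterations: " << count << "\n";
    return 0;
}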

Rounding double values in C++ like MS Excel does it

I've searched all over the net, but I could not find a solution to my problem. I simply want a function that rounds double values like MS Excel does. Here is my code:
#include <iostream>
#include "math.h"
using namespace std;

double Round(double value, int precision) {
    return floor(((value * pow(10.0, precision)) + 0.5)) / pow(10.0, precision);
}

int main(int argc, char *argv[]) {
    /* The way MS Excel does it:
       1.27815 1.27840 -> 1.27828
       1.27813 1.27840 -> 1.27827
       1.27819 1.27843 -> 1.27831
       1.27999 1.28024 -> 1.28012
       1.27839 1.27866 -> 1.27853
    */
    cout << Round((1.27815 + 1.27840) / 2, 5) << "\n"; // *
    cout << Round((1.27813 + 1.27840) / 2, 5) << "\n";
    cout << Round((1.27819 + 1.27843) / 2, 5) << "\n";
    cout << Round((1.27999 + 1.28024) / 2, 5) << "\n"; // *
    cout << Round((1.27839 + 1.27866) / 2, 5) << "\n"; // *
    if (Round((1.27815 + 1.27840) / 2, 5) == 1.27828) {
        cout << "Hurray...\n";
    }
    system("PAUSE");
    return EXIT_SUCCESS;
}
I found the function here on Stack Overflow; the answer states that it works like the built-in Excel rounding routine, but it does not. Could you tell me what I'm missing?
In a sense what you are asking for is not possible:
Floating point values on most common platforms do not have a notion of a "number of decimal places". Numbers like 2.3 or 8.71 simply cannot be represented precisely. Therefore, it makes no sense to ask for any function that will return a floating point value with a given number of non-zero decimal places -- such numbers simply do not exist.
The only thing you can do with floating point types is to compute the nearest representable approximation, and then print the result with the desired precision, which will give you the textual form of the number that you desire. To compute the representation, you can do this:
#include <cmath>
#include <limits>

double round(double x, int n)
{
    int e;
    double d;
    std::frexp(x, &e);
    if (e >= std::numeric_limits<double>::digits)
        return x;             // |x| >= 2^52, so x is already an integer, nothing to do
    double const f = std::pow(10.0, n);
    std::modf(x * f, &d);     // d == integral part of 10^n * x
    return d / f;
}
(You can also use modf instead of frexp to determine whether x is already an integer, which catches all integers rather than just very large values. You should also check that n is non-negative, or otherwise define semantics for negative "precision".)
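A quick usage check of the helper above (it assumes the fixed round() is in scope; note that it truncates rather than rounds half up):
#include <iostream>

int main()
{
    std::cout << round(2.3456, 2) << '\n';   // prints 2.34 (truncated toward zero)
    return 0;
}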
As an alternative to using floating point types, you could perform fixed point arithmetic. That is, you store everything as integers, but you treat them as units of, say, 1/1000. Then you could print such a number as follows:
std::cout << n / 1000 << "." << n % 1000;
Addition works as expected, though you have to write your own multiplication function.
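As a small refinement of the print statement above, padding the fractional part keeps leading zeros (print_fixed is just an illustrative helper for non-negative values in units of 1/1000):
#include <iostream>
#include <iomanip>

void print_fixed(long long n)
{
    // 1050 (meaning 1.050) prints as "1.050" rather than "1.50"
    std::cout << n / 1000 << '.' << std::setw(3) << std::setfill('0') << n % 1000;
}

int main()
{
    print_fixed(1050);
    std::cout << '\n';
    return 0;
}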
To compare double values, you must specify a range within which the result can be considered "safe". You could use a macro for that.
Here is one example of what you could use:
#define COMPARE( A, B, PRECISION ) ( ( A >= B - PRECISION ) && ( A <= B + PRECISION ) )

int main()
{
    double a = 12.34567;
    bool equal = COMPARE( a, 12.34567F, 0.0002 );
    equal = COMPARE( a, 15.34567F, 0.0002 );
    return 0;
}
Thank you all for your answers! After considering the possible solutions, I changed the original Round() function in my code to add 0.6 instead of 0.5 to the value.
The value "127827.5" (I do understand that this is not an exact representation!) becomes "127828.1" and, finally, through floor() and dividing, it becomes "1.27828" (or something more like 1.2782800..001). Using the COMPARE macro suggested by Renan Greinert with a correctly chosen precision, I can now safely compare the values.
Here is the final version:
#include <iostream>
#include "math.h"

#define COMPARE(A, B, PRECISION) ((A >= B-PRECISION) && (A <= B+PRECISION))

using namespace std;

double Round(double value, int precision) {
    return floor(value * pow(10.0, precision) + 0.6) / pow(10.0, precision);
}

int main(int argc, char *argv[]) {
    /* The way MS Excel does it:
       1.27815 1.27840 -> 1.27828
       1.27813 1.27840 -> 1.27827
       1.27819 1.27843 -> 1.27831
       1.27999 1.28024 -> 1.28012
       1.27839 1.27866 -> 1.27853
    */
    cout << Round((1.27815 + 1.27840)/2, 5) << "\n";
    cout << Round((1.27813 + 1.27840)/2, 5) << "\n";
    cout << Round((1.27819 + 1.27843)/2, 5) << "\n";
    cout << Round((1.27999 + 1.28024)/2, 5) << "\n";
    cout << Round((1.27839 + 1.27866)/2, 5) << "\n";

    // Comparing the rounded value against a fixed one
    if (COMPARE(Round((1.27815 + 1.27840)/2, 5), 1.27828, 0.000001)) {
        cout << "Hurray!\n";
    }

    // Comparing two rounded values
    if (COMPARE(Round((1.27815 + 1.27840)/2, 5), Round((1.27814 + 1.27841)/2, 5), 0.000001)) {
        cout << "Hurray!\n";
    }

    system("PAUSE");
    return EXIT_SUCCESS;
}
I've tested it by rounding a hundred double values and then comparing the results to what Excel gives. They were all the same.
I'm afraid the answer is that Round cannot perform magic.
Since 1.27828 is not exactly representable as a double, you cannot compare some double with 1.27828 and hope it will match.
You need to do the maths without the decimal part to get those numbers... so something like this:
double dPow = pow(10.0, 5.0);
double a = 1.27815;
double b = 1.27840;
double a2 = 1.27815 * dPow;
double b2 = 1.27840 * dPow;
double c = (a2 + b2) / 2 + 0.5;
Using your function...
double c = (Round(a) + Round(b)) / 2 + 0.5;