I'm using this C++ fixed point library:
https://github.com/MikeLankamp/fpm
My program multiplies two values together and prints the result, first using fixed point, then using built-in floating point:
#include <fpm/fixed.hpp>
#include <fpm/ios.hpp>
#include <iostream>
int main()
{
    fpm::fixed_16_16 x{2.2};
    fpm::fixed_16_16 y{2};
    std::cout << (x * y) << std::endl;
    std::cout << (2.2 * 2) << std::endl;
}
The fixed-point library outputs a less-accurate result than just multiplying two floats:
4.3999
4.4
What am I doing wrong?
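(For context, here is a minimal hand-rolled sketch of Q16.16 arithmetic; the scale factor of 2^16 and round-to-nearest storage are assumptions about how a 16.16 fixed-point type typically works, not code taken from fpm. It reproduces the difference, because 2.2 has no exact representation in 16 fractional bits.)
#include <cstdint>
#include <iomanip>
#include <iostream>

int main()
{
    // Assumed Q16.16 storage: round(value * 2^16), kept in a 32-bit integer.
    std::int32_t raw_x = static_cast<std::int32_t>(2.2 * 65536.0 + 0.5); // 144179
    std::int32_t raw_y = 2 * 65536;                                      // 131072

    // Q16.16 multiplication: widen, multiply, shift back down by 16 bits.
    std::int64_t raw_prod = (static_cast<std::int64_t>(raw_x) * raw_y) >> 16;

    std::cout << std::setprecision(10);
    std::cout << raw_x / 65536.0 << std::endl;    // 2.199996948 -- the value actually stored for 2.2
    std::cout << raw_prod / 65536.0 << std::endl; // 4.399993896 -- consistent with the 4.3999 above
}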
Related
I am trying this:
std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
and expecting the output to be:
0.0009
But the output is:
0.00089999999999999998
g++ version: 5.4.0, Boost version: 1.66
What can I do to make it print what it was given?
You can in fact override the default precision:
Live On Coliru
#include <boost/lexical_cast.hpp>
#ifdef BOOST_LCAST_NO_COMPILE_TIME_PRECISION
# error unsupported
#endif
template <> struct boost::detail::lcast_precision<double> : std::integral_constant<unsigned, 5> { };
#include <string>
#include <iostream>
int main() {
    std::cout << boost::lexical_cast<std::string>(0.0009) << std::endl;
}
Prints
0.0009
However, this is both not supported (detail::) and not flexible (all doubles will come out this way now).
The Real Problem
The problem is loss of accuracy converting from the decimal representation to the binary representation. Instead, use a decimal float representation:
Live On Coliru
#include <boost/lexical_cast.hpp>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <string>
#include <iostream>
using Double = boost::multiprecision::cpp_dec_float_50;
int main() {
    Double x("0.009"),
           y = x * 2,
           z = x / 77;

    for (Double v : { x, y, z }) {
        std::cout << boost::lexical_cast<std::string>(v) << "\n";
        std::cout << v << "\n";
    }
}
Prints
0.009
0.009
0.018
0.018
0.000116883
0.000116883
boost::lexical_cast doesn't allow you to specify the precision when converting a floating-point number into its string representation. From the documentation:
For more involved conversions, such as where precision or formatting need tighter control than is offered by the default behavior of lexical_cast, the conventional std::stringstream approach is recommended.
So you could use a stringstream:
// needs <sstream>, <iomanip> and <iostream>
double d = 0.0009;
std::ostringstream ss;
ss << std::setprecision(4) << d;
std::cout << ss.str() << '\n';
Another option is to use the boost::format library:
// needs <boost/format.hpp>
std::string s = (boost::format("%1$.4f") % d).str();
std::cout << s << '\n';
Both will print 0.0009.
0.0009 is a double precision floating literal with, assuming IEEE754, the value
0.00089999999999999997536692664112933925935067236423492431640625
That's what boost::lexical_cast<std::string> sees as the function parameter. And lexical_cast's default precision setting rounds to the 17th significant figure:
0.00089999999999999998
Really, if you want exact decimal precision, then use a decimal type (Boost has one), or work in integers and splice in the decimal separator yourself. But in your case, given that you're simply outputting the number with no complex calculations, rounding to the 15th significant figure will have the desired effect: inject
std::setprecision(15)
into the output stream.
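A minimal sketch of that last suggestion, using a plain output stream (the manipulator applies to whatever stream does the formatting):
#include <iomanip>
#include <iostream>

int main() {
    double d = 0.0009;
    std::cout << d << '\n';                          // 0.0009  (default precision of 6)
    std::cout << std::setprecision(17) << d << '\n'; // 0.00089999999999999998
    std::cout << std::setprecision(15) << d << '\n'; // 0.0009
}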
I am having an issue when trying to output my float using std::cout <<
I have the following values:
vector2f = {-32.00234098f, 96.129380f} //takes 2 floats (x, y)
output: -32.0023:96.1294
What I am looking for is:
output: -32.00234098:96.129380
The actual numbers could vary from 7 decimal places (.0000007) to 3 decimal places (.003), so setting a fixed rounding number does not work in this case.
Any help would be great, as I have tried changing to doubles as well but to no avail.
Thanks in advance!
There are 2 problems:
1. You need to include <iomanip> and use the std::setprecision manipulator.
2. To get the level of accuracy you want, you will need to use doubles rather than floats.
e.g.:
#include <iostream>
#include <iomanip>
int main()
{
    auto x = -32.00234098f, y = 96.129380f;
    std::cout << std::setprecision(8) << std::fixed << x << ":" << y << std::endl;

    // doubles
    auto a = -32.00234098, b = 96.129380;
    std::cout << std::setprecision(8) << std::fixed << a << ":" << b << std::endl;
}
example output:
-32.00234222:96.12937927
-32.00234098:96.12938000
You can set the output precision of the stream using the std::setprecision manipulator.
To print trailing zeroes up to the given precision like in your example output, you need to use std::fixed manipulator.
There is this:
https://codeyarns.com/2016/02/16/how-to-compare-eigen-matrices-for-equality/
But there is no isApprox for tensors.
The following doesn't do what I want:
#include <Eigen/Core>
#include <unsupported/Eigen/CXX11/Tensor>
#include <array>
#include <iostream>
using namespace Eigen;
using namespace std;
int main()
{
    // Create 2 matrices using tensors of rank 2
    Eigen::Tensor<int, 2> a(2, 3);
    Eigen::Tensor<int, 2>* b = &a;
    cerr << (*b == *b) << endl;
}
because it does a coordinate-wise comparison and returns a tensor of the same dimensions instead of a single true/false value.
How do I check whether two tensors are identical? There is no isApprox for tensors.
I could write my own function, but I want to be able to use GPU power when available, and it seems like Eigen has built-in GPU support.
For an exact comparison of 2 tensors A and B, you can use the comparison operator followed by a boolean reduction:
Tensor<bool, 0> eq = (A==B).all();
This will return a tensor of rank 0 (i.e. a scalar) that contains a boolean value that's true iff each coefficient of A is equal to the corresponding coefficient of B.
There is no approximate comparison at the moment, although it wouldn't be difficult to add.
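For illustration, a short self-contained sketch of that comparison-plus-reduction; reading the rank-0 result back with operator() is my assumption about the Tensor API:
#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main()
{
    Eigen::Tensor<int, 2> A(2, 3), B(2, 3);
    A.setConstant(1);
    B.setConstant(1);

    // Element-wise == followed by a boolean "all" reduction gives a rank-0 tensor.
    Eigen::Tensor<bool, 0> eq = (A == B).all();
    std::cout << eq() << "\n";  // 1: every coefficient matches

    B(1, 2) = 7;
    Eigen::Tensor<bool, 0> eq2 = (A == B).all();
    std::cout << eq2() << "\n"; // 0: at least one coefficient differs
}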
You can always use a couple of Eigen::Maps to do the isApprox checks.
#include <iostream>
#include <unsupported/Eigen/CXX11/Tensor>
using namespace Eigen;
int main()
{
    Tensor<double, 3> t(2, 3, 4);
    Tensor<double, 3> r(2, 3, 4);
    t.setConstant(2.1);
    r.setConstant(2.1);
    t(1, 2, 3) = 2.2;

    std::cout << "Size: " << r.size() << "\n";
    std::cout << "t: " << t << "\n";
    std::cout << "r: " << r << "\n";

    Map<VectorXd> mt(t.data(), t.size());
    Map<VectorXd> mr(r.data(), r.size());
    std::cout << "Default isApprox: " << mt.isApprox(mr) << "\n";
    std::cout << "Coarse isApprox: " << mt.isApprox(mr, 0.11) << "\n";
    return 0;
}
P.S./N.B. Regarding Eigen's built-in GPU support... Last I checked it was fairly limited, and with good reason: it was limited to fixed-size matrices, as dynamic allocation on a GPU is really something you want to avoid like the common cold (if not like the plague). I take it back. It looks like the Tensor module supports GPUs pretty well.
I want to do calculations involving large integers and a double, for example,
1245.....889 * 3.14
I think we cannot construct a cpp_int from 3.14 because of
http://www.boost.org/doc/libs/1_56_0/libs/multiprecision/doc/html/boost_multiprecision/tut/conversions.html
Also, I am not sure if I can use cpp_dec_float, because cpp_dec_float requires specifying the number of significant digits, which cannot be arbitrarily large.
Does that mean I should use cpp_rational? But then I would have to convert 3.14 into a rational number first, as in
how can I extract the mantissa of a double
Is there a better way to represent a double like 3.14 and a large int together?
Thank you,
Your question seems amply confused, but here goes:
You can use the gmp_float with dynamic precision by specifying 0 for the precision:
Live On Coliru
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <boost/multiprecision/gmp.hpp>
#include <iostream>
int main() {
    using Int   = boost::multiprecision::cpp_int;
    using Float = boost::multiprecision::number<boost::multiprecision::gmp_float<0>>;

    Float fake_pi;
    boost::multiprecision::default_ops::calc_pi(fake_pi.backend(), 2000);

    Int value("12345678901234567890123456789012345678901234567890");

    std::cout << std::fixed << value << " * " << fake_pi << " = "
              << Float(value.convert_to<Float>() * fake_pi);
}
Prints
12345678901234567890123456789012345678901234567890 * 3.141593 = 38785094139697029053093797030280437291228399875653.959648
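If GMP is not available, a fixed but generous decimal precision may already be enough; below is a sketch using cpp_dec_float instead (the 100-digit precision is an arbitrary assumption, and constructing the factor from the string "3.14" avoids the binary rounding the asker was worried about):
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    using Int   = boost::multiprecision::cpp_int;
    using Float = boost::multiprecision::cpp_dec_float_100; // 100 decimal digits

    Int   value("12345678901234567890123456789012345678901234567890");
    Float factor("3.14"); // built from a string, so no double rounding of 3.14

    std::cout << std::fixed << Float(value.convert_to<Float>() * factor) << "\n";
}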
Below is some simple code I am working with:
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    float f = 1.66f;
    int d = (int)f;
    double g = (double)d;
    cout.precision(6);
    cout << g << "\n";
}
I want it to print 1.000000, but it prints only 1. Even after the int has been converted back to a double, why is it still printed as if it were an integer value?
You can add cout << std::fixed;
#include <iostream>
#include <iomanip>
using namespace std;
int main() {
    float f = 1.66f;
    int d = (int)f;
    double g = (double)d;
    cout.precision(6);
    cout << std::fixed;
    cout << g << "\n";
}
and you get 1.000000
Explanations (edit)
When you use std::fixed:
When floatfield is set to fixed, floating-point values are written
using fixed-point notation: the value is represented with exactly as
many digits in the decimal part as specified by the precision field
(precision) and with no exponent part.
When you use std::defaultfloat (the one you are using):
When floatfield is set to defaultfloat, floating-point values are
written using the default notation: the representation uses as many
meaningful digits as needed up to the stream's decimal precision
(precision), counting both the digits before and after the decimal
point (if any).
That's why the trailing .000000 digits are considered irrelevant!
(If you had 1.00001, it would have been printed.)
setprecision sets how precise the printed result has to be, e.g.:
std::cout << (1.f)/6 << std::endl; // prints 0.166667
std::cout.precision(7);
std::cout << (1.f)/6 << std::endl; // prints 0.1666667
But it does not require that 0's are printed out, consider:
std::cout.precision(5);
std::cout << 1.1110f << std::endl; // prints 1.111
std::cout << 1.1111f << std::endl; // prints 1.1111
And, as coincoin suggests, the solution to get the 0's printed out is to use std::fixed!
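For completeness, a small sketch of the same snippet with std::fixed applied:
#include <iostream>

int main() {
    std::cout.precision(5);
    std::cout << std::fixed;
    std::cout << 1.1110f << std::endl; // prints 1.11100
    std::cout << 1.1111f << std::endl; // prints 1.11110
}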