How to convert huge numbers of boost multiprecision (cpp_bin_float) efficiently? - c++

I have an "extented double" data-type that stores the exponent in a seperate int variable to increase the range like this:
struct extDouble
{
    double Value;
    int Exponent;
};
I now need to convert a boost multiprecision float (cpp_bin_float) into this data-type. For values within the range of doubles this is no problem and can be done with
extDouble Number;
Number.Value = BigNumber.convert_to<double>();
Number.Exponent = 0;
(the loss of decimal places is no problem).
But how could I (efficiently) do this with values beyond the range of doubles? I know I could read the exponent directly like this:
Number.Exponent = BigNumber.backend().exponent();
But in order to get the fractional value, I would have to multiply BigNumber by pow(2, Exponent), which I want to avoid for performance reasons. Is there a way to read out the fractional value directly, or to simply set the exponent of BigNumber to 0? Or does anybody have another idea how to accomplish the whole thing as efficiently as possible (something like frexp or ldexp for boost multiprecision)?
Edit:
To clarify: if there were a frexp for boost multifloats, I would simply do something like this:
Number.Value = frexp(BigNumber, &Number.Exponent);

I thought I was going to post a clever answer based on ilogb/scalbn¹, but then I thought to just check the premise:
extDouble toExt(F number) {
    extDouble n;
    n.Value = frexp(number, &n.Exponent).convert_to<double>();
    return n;
}
Works just fine. frexp is found via ADL, and resolves to the multiprecision implementation.
Generalized a bit and with test cases:
Live On Coliru
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>
#include <limits>
using F = boost::multiprecision::cpp_bin_float_50;
struct extDouble {
    double Value;
    int Exponent;

    friend auto& operator<<(std::ostream& os, extDouble const& ed) {
        return os << "{Value:" << ed.Value << ", Exponent:" << ed.Exponent << "}";
    }
};

template <typename F> extDouble toExt(F number) {
    extDouble n;
    n.Value = frexp(number, &n.Exponent).template convert_to<double>();
    return n;
}
int main() {
    using L = std::numeric_limits<double>;
    std::cout << "max exponent: " << L::max_exponent << "\n";
    std::cout << "min exponent: " << L::min_exponent << "\n";

    for (F f :
         {
             F("123.45"),
             F("123.45") * pow(F(2), L::max_exponent - 5),
             F("123.45") * pow(F(2), L::min_exponent + 3),
             F("123.45") * pow(F(2), L::max_exponent * 2),
             F("123.45") * pow(F(2), L::min_exponent * 2),
         }) //
    {
        std::cout << f << " -> " << toExt(f) << std::endl;
    }
}
Prints
max exponent: 1024
min exponent: -1021
123.45 -> {Value:0.964453, Exponent:7}
6.93516e+308 -> {Value:0.964453, Exponent:1026}
4.39497e-305 -> {Value:0.964453, Exponent:-1011}
3.98953e+618 -> {Value:0.964453, Exponent:2055}
2.44478e-613 -> {Value:0.964453, Exponent:-2035}
¹ much like here https://en.cppreference.com/w/cpp/numeric/math/frexp
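For completeness, the ilogb/scalbn route mentioned at the top would have looked roughly like this (a sketch only, ignoring zero and non-finite inputs; ilogb and scalbn are found via ADL just like frexp):
template <typename F> extDouble toExtIlogb(F number) {
    extDouble n;
    // ilogb gives e such that |number| / 2^e lies in [1, 2);
    // shifting by e + 1 moves the mantissa into frexp's [0.5, 1) range.
    n.Exponent = static_cast<int>(ilogb(number)) + 1;
    F mantissa = scalbn(number, -n.Exponent);
    n.Value = mantissa.template convert_to<double>();
    return n;
}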

Related

Double precision issues when converting it to a large integer

Precision is the number of digits in a number. Scale is the number of
digits to the right of the decimal point in a number. For example, the
number 123.45 has a precision of 5 and a scale of 2.
I need to convert a double with a maximum scale of 7 (i.e. it may have up to 7 digits after the decimal point) to a __int128. However, given a number, I don't know its actual scale in advance.
#include <iostream>
#include <iomanip>
#include <limits>
#include <string>
#include "json.hpp"
using json = nlohmann::json;
static std::ostream& operator<<(std::ostream& o, const __int128& x) {
    if (x == std::numeric_limits<__int128>::min()) return o << "-170141183460469231731687303715884105728";
    if (x < 0) return o << "-" << -x;
    if (x < 10) return o << (char)(x + '0');
    return o << x / 10 << (char)(x % 10 + '0');
}
int main()
{
    std::string str = R"({"time": [0.143]})";
    std::cout << "input: " << str << std::endl;
    json j = json::parse(str);
    std::cout << "output: " << j.dump(4) << std::endl;
    double d = j["time"][0].get<double>();
    __int128_t d_128_bad = d * 10000000;
    __int128_t d_128_good = __int128(d * 1000) * 10000;
    std::cout << std::setprecision(16) << std::defaultfloat << d << std::endl;
    std::cout << "d_128_bad: " << d_128_bad << std::endl;
    std::cout << "d_128_good: " << d_128_good << std::endl;
}
Output:
input: {"time": [0.143]}
output: {
    "time": [
        0.143
    ]
}
0.143
d_128_bad: 1429999
d_128_good: 1430000
As you can see, the converted value is not the expected 1430000; instead it is 1429999. I know the reason is that a floating-point number cannot always be represented exactly. The problem could be solved if I knew the number of digits after the decimal point.
For example,
I can instead use __int128_t(d * 1000) * 10000. However, I don't know the scale of a given number which might have a maximum of scale 7.
Question> Is there a possible solution for this? Also, I need to do this conversion very fast.
I'm not familiar with this library, but it does appear to have a mechanism to get a json object's string representation (dump()). I would suggest you parse that into your value rather than going through the double intermediate representation, as in that case you will know the scale of the value as it was written.
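A minimal sketch of that idea (parse_scaled is a hypothetical helper, not part of nlohmann/json; it assumes a plain decimal string, such as j["time"][0].dump() would give, with no exponent notation and at most 7 fractional digits):
#include <string>

// Hypothetical helper: parse "0.143" into an integer scaled by 10^7,
// with no double intermediate, so no rounding surprises.
__int128 parse_scaled(const std::string& s, int max_scale = 7) {
    __int128 result = 0;
    bool negative = false, in_fraction = false;
    int pad = max_scale; // fractional digits still owed
    for (char ch : s) {
        if (ch == '-') negative = true;
        else if (ch == '.') in_fraction = true;
        else {
            if (in_fraction && pad-- == 0) break; // ignore digits beyond the maximum scale
            result = result * 10 + (ch - '0');
        }
    }
    while (pad-- > 0) result *= 10; // pad the missing fractional digits
    return negative ? -result : result;
}
With the example above, parse_scaled("0.143") yields exactly 1430000.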

std::setprecision sets the number of significant figures. How do I use iomanip to set the precision?

I have always found iomanip confusing and counter intuitive. I need help.
A quick internet search finds (https://www.vedantu.com/maths/precision) "We thus consider precision as the maximum number of significant digits after the decimal point in a decimal number" (the emphasis is mine). That matches my understanding too. However I wrote a test program and:
stm << std::setprecision(3) << 5.12345678;
std::cout << "5.12345678: " << stm.str() << std::endl;
stm.str("");
stm << std::setprecision(3) << 25.12345678;
std::cout << "25.12345678: " << stm.str() << std::endl;
stm.str("");
stm << std::setprecision(3) << 5.1;
std::cout << "5.1: " << stm.str() << std::endl;
stm.str("");
outputs:
5.12345678: 5.12
25.12345678: 25.1
5.1: 5.1
If the precision is 3 then the output should be:
5.12345678: 5.123
25.12345678: 25.123
5.1: 5.1
Clearly the C++ standard has a different interpretation of the meaning of "precision" as relates to floating point numbers.
If I do:
stm.setf(std::ios::fixed, std::ios::floatfield);
then the first two values are formatted correctly, but the last comes out as 5.100.
How do I set the precision without padding?
You can try using this workaround:
#include <iomanip> // std::setprecision
#include <cmath>   // std::log10, std::abs

decltype(std::setprecision(1)) setp(double number, int p) {
    int e = static_cast<int>(std::abs(number));
    e = e != 0 ? static_cast<int>(std::log10(e)) + 1 + p : p;
    while (number != 0.0 && static_cast<int>(number *= 10) == 0 && e > 1)
        e--; // for numbers like 0.001: those zeros are not treated as digits by setprecision.
    return std::setprecision(e);
}
And then:
auto v = 5.12345678;
stm << setp(v, 3) << v;
Another, more verbose but arguably more elegant, solution is to create a struct like this:
struct __setp {
    double number;
    bool fixed = false;
    int prec;
};

std::ostream& operator<<(std::ostream& os, const __setp& obj)
{
    if (obj.fixed)
        os << std::fixed;
    else
        os << std::defaultfloat;
    os.precision(obj.prec);
    os << obj.number; // comment this out if you do not want to print the number immediately
    return os;
}

__setp setp(double number, int p) {
    __setp setter;
    setter.number = number;
    int e = static_cast<int>(std::abs(number));
    e = e != 0 ? static_cast<int>(std::log10(e)) + 1 + p : p;
    while (number != 0.0 && static_cast<int>(number *= 10) == 0)
        e--; // for numbers like 0.001: those zeros are not treated as digits by setprecision.
    if (e <= 0) {
        setter.fixed = true;
        setter.prec = 1;
    } else
        setter.prec = e;
    return setter;
}
Using it like this:
auto v = 5.12345678;
stm << setp(v, 3);
I don't think you can do it nicely. There are two candidate formats: defaultfloat and fixed. For the former, "precision" is the maximum number of digits, where both sides of the decimal separator count. For the latter "precision" is the exact number of digits after the decimal separator.
So your solution, I think, is to use fixed format and then manually clear trailing zeros:
#include <iostream>
#include <iomanip>
#include <sstream>

void print(const double number)
{
    std::ostringstream stream;
    stream << std::fixed << std::setprecision(3) << number;
    auto string = stream.str();
    while (string.back() == '0')
        string.pop_back();
    if (string.back() == '.') // in case number is integral; beware of localization issues
        string.pop_back();
    std::cout << string << "\n";
}

int main()
{
    print(5.12345678);
    print(25.12345678);
    print(5.1);
}
The fixed format gives almost what you want except that it preserves trailing zeros. There is no built-in way to avoid that but you can easily remove those zeros manually. For example, in C++20 you can do the following using std::format:
std::string format_fixed(double d) {
    auto s = std::format("{:.3f}", d); // needs #include <format>
    auto end = s.find_last_not_of('0');
    if (end == std::string::npos) return s;
    if (s[end] == '.') --end; // also drop a bare trailing '.' for integral values
    return s.substr(0, end + 1);
}
std::cout << "5.12345678: " << format_fixed(5.12345678) << "\n";
std::cout << "25.12345678: " << format_fixed(25.12345678) << "\n";
std::cout << "5.1: " << format_fixed(5.1) << "\n";
Output:
5.12345678: 5.123
25.12345678: 25.123
5.1: 5.1
The same example with the {fmt} library, which std::format is based on: godbolt.
Disclaimer: I'm the author of {fmt} and C++20 std::format.

How to display C++ Boost Library multi-precision big integers in binary form?

I'm new to Boost and trying to use its multi-precision library to multiply very large inputs:
mp::uint1024_t my_1024_bit_int1 = 0b00100101101000100010010...010101;
mp::uint1024_t my_1024_bit_int2 = 0b0010101001000101000010100000001001...01010111; // bigger in practice
mp::uint1024_t my_1024_bit_result = my_1024_bit_int2*my_1024_bit_int1;
I need to be able to save the result as a string in binary form. I have tried to access the number of "limbs" in the integer:
int limbs = my_1024_bit_result.backend.limbs();
and then iterate through each limb and use std::bitset to convert each limb to a binary string, but it did not work.
How else could I achieve this?
If you actually meant binary digits:
template <typename Integer>
std::string to_bin(Integer num) {
    auto sign = num.sign();
    num = abs(num);

    std::string result;
    while (num) {
        result += "01"[int(num % 2)];
        num /= 2;
    }
    result += sign < 0 ? "b0-" : "b0"; // prefix, appended reversed and fixed up below
    std::reverse(begin(result), end(result));
    return result;
}
Note how it supports signed types as well.
Live On Coliru
int main() {
    mp::uint1024_t a = 0b00100101101000100010010010101;
    mp::uint1024_t b = 0b001010100100010100001010000000100101010111; // bigger in practice
    mp::uint1024_t c = a * b;

    std::cout << a << " * " << b << " = " << c << "\n";
    std::cout << "\n" << to_bin(a) << " * " << to_bin(b) << "\n = " << to_bin(c) << "\n";
}
Prints
78922901 * 726187641175 = 57312835311878048675
0b100101101000100010010010101 * 0b1010100100010100001010000000100101010111
= 0b110001101101100000000110000000111101001010111101001000101110100011
Serialization?
In case you meant "binary serialization", use serialization:
Writing boost::multiprecision data type to binary file
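For raw bytes rather than a digit string, Boost.Multiprecision also provides import_bits/export_bits; a minimal sketch (chunks of 8 bits, most significant byte first by default):
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
#include <iterator>
#include <vector>

namespace mp = boost::multiprecision;

int main() {
    mp::uint1024_t c("57312835311878048675");
    std::vector<unsigned char> bytes;
    // Split the value into 8-bit chunks, most significant byte first;
    // the byte vector can then be written to a file or socket as-is.
    mp::export_bits(c, std::back_inserter(bytes), 8);
    std::cout << bytes.size() << " bytes\n";
}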

How to print the binary value of negative numbers? [duplicate]

I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 11000110 (it's a char, so 1 byte)
b = 11111000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious as to whether there is a standard way to do it.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>
int main() {
    int a = -58, b = a >> 3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, which std::format is based on. {fmt} also provides the print function, which makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret as a char array first, then you can print the contents of that array, as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>
template <typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin, like std::hex or std::dec, but it's not hard to output a number in binary yourself:
You output the left-most bit by masking all the others, left-shift, and repeat that for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
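A minimal sketch of that loop (this variant shifts the value right and masks, which is equivalent; it assumes an unsigned value, so convert signed inputs first):
#include <iostream>
#include <climits>

template <typename T>
void print_bits(T value) {
    // Walk from the most significant bit down to bit 0, printing one bit per step.
    for (int i = sizeof(T) * CHAR_BIT - 1; i >= 0; --i)
        std::cout << ((value >> i) & 1);
    std::cout << '\n';
}

int main() {
    print_bits<unsigned char>(-58); // prints 11000110
}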
Similar to what is already posted, just using bit-shift and mask to get each bit; usable for any type via a template. (CHAR_BIT from <climits> is the standard way to get the number of bits in a byte.)
#include <iostream>
#include <climits>

template <typename T>
void printBin(const T& t) {
    size_t nBytes = sizeof(T);
    char* rawPtr((char*)(&t));
    for (size_t byte = 0; byte < nBytes; byte++) {
        for (size_t bit = 0; bit < CHAR_BIT; bit++) {
            std::cout << (((rawPtr[byte]) >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main(void) {
    for (int i = 0; i < 50; i++) {
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function:
template <typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * 8>(x);
    return ss.str();
}
Usage:
int main() {
    uint16_t x = 8;
    std::cout << toBinaryString(x);
}
This works with all kinds of integers.
#include <iostream>
#include <string>
#include <cmath> // in order to use the pow() function
using namespace std;

string show_binary(unsigned int u, int num_of_bits);

int main()
{
    cout << show_binary(128, 8) << endl;  // should print 10000000
    cout << show_binary(128, 5) << endl;  // should print 00000
    cout << show_binary(128, 10) << endl; // should print 0010000000
    return 0;
}

string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    u %= (unsigned int)pow(2, num_of_bits);               // keep only the low num_of_bits bits
    for (int t = pow(2, num_of_bits - 1); t > 0; t /= 2)  // t iterates down through the bit values
        if (u >= t) {              // check if bit t is set in u
            u -= t;
            a += "1";              // if so, add a 1
        } else {
            a += "0";              // if not, add a 0
        }
    return a;                      // returns string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>

template <typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T) * CHAR_BIT> _bs;
};

template <typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
If you are using an older C++ version, you can use this snippet:
template <typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T) * 8;
    for (int i = n - 1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
(a and b are chars, so operator<< prints them as raw characters, which happen not to be printable here; c is a short and prints numerically.)
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids outputting leading zeros and doesn't rely on <bitset>.
unsigned r = 58; // assume r holds the value to print
std::string s;
do {
    s = std::to_string(r & 1) + s;
} while (r >>= 1);
std::cout << s;
Note, however, that this solution adds runtime cost, so if you are competing on performance, or not competing at all, you should use one of the other solutions on this page.
Here is a way to get at the raw bit pattern of a number x of another type (note that this pointer cast formally violates strict aliasing; std::memcpy into an unsigned int is the well-defined way to do the same thing):
unsigned int i = *(unsigned int*) &x;
Is this what you're looking for?
std::cout << std::hex << val << std::endl;

How to produce formatting similar to .NET's '0.###%' in iostreams?

I would like to output a floating-point number as a percentage, with up to three decimal places.
I know that iostreams have three different ways of presenting floats:
"default", which displays using either the rules of fixed or scientific, depending on the number of significant digits desired as defined by setprecision;
fixed, which displays a fixed number of decimal places defined by setprecision; and
scientific, which displays a fixed number of decimal places but using scientific notation, i.e. mantissa + exponent of the radix.
These three modes can be seen in effect with this code:
#include <iostream>
#include <iomanip>

int main() {
    double d = 0.00000095;
    double e = 0.95;
    std::cout << std::setprecision(3);

    std::cout.unsetf(std::ios::floatfield);
    std::cout << "d = " << (100. * d) << "%\n";
    std::cout << "e = " << (100. * e) << "%\n";

    std::cout << std::fixed;
    std::cout << "d = " << (100. * d) << "%\n";
    std::cout << "e = " << (100. * e) << "%\n";

    std::cout << std::scientific;
    std::cout << "d = " << (100. * d) << "%\n";
    std::cout << "e = " << (100. * e) << "%\n";
}
// output:
// d = 9.5e-05%
// e = 95%
// d = 0.000%
// e = 95.000%
// d = 9.500e-05%
// e = 9.500e+01%
None of these options satisfies me.
I would like to avoid any scientific notation here as it makes the percentages really hard to read. I want to keep at most three decimal places, and it's ok if very small values show up as zero. However, I would also like to avoid trailing zeros in fractional places for cases like 0.95 above: I want that to display as in the second line, as "95%".
In .NET, I can achieve this with a custom format string like "0.###%", which gives me a number formatted as a percentage with at least one digit left of the decimal separator, and up to three digits right of the decimal separator, trailing zeros skipped: http://ideone.com/uV3nDi
Can I achieve this with iostreams, without writing my own formatting logic (e.g. special casing small numbers)?
I'm reasonably certain nothing built into iostreams supports this directly.
I think the cleanest way to handle it is to round the number before passing it to an iostream to be printed out:
#include <iostream>
#include <vector>
#include <cmath>

double rounded(double in, int places) {
    double factor = std::pow(10, places);
    return std::round(in * factor) / factor;
}

int main() {
    std::vector<double> values{ 0.000000095123, 0.0095123, 0.95, 0.95123 };
    for (auto i : values)
        std::cout << "value = " << 100. * rounded(i, 5) << "%\n";
}
Due to the way it does rounding, this has a limitation on the magnitude of numbers it can work with. For percentages this probably isn't an issue, but if you were working with a number close to the largest that can be represented in the type in question (double in this case) the multiplication by pow(10, places) could/would overflow and produce bad results.
Though I can't be absolutely certain, it doesn't seem like this would be likely to cause an issue for the problem you seem to be trying to solve.
This solution is terrible.
I am serious. I don't like it. It's probably slow and the function has a stupid name. Maybe you can use it for test verification, though, because it's so dumb I guess you can easily see it pretty much has to work.
It also assumes the decimal separator is '.', which doesn't have to be the case. The proper decimal point could be obtained by:
char point = std::use_facet< std::numpunct<char> >(std::cout.getloc()).decimal_point();
But that's still not solving the problem, because the characters used for digits could be different and in general this isn't something that should be written in such a way.
Here it is.
#include <cmath>
#include <iomanip>
#include <sstream>
#include <string>

template <typename Floating>
std::string formatFloatingUpToN(unsigned n, Floating f) {
    std::stringstream out;
    out << std::setprecision(n) << std::fixed;
    out << f;
    std::string ret = out.str();

    // if this clause holds, it's all zeroes
    if (std::abs(f) < std::pow(0.1, n))
        return ret;

    while (true) {
        if (ret.back() == '0') {
            ret.pop_back();
            continue;
        } else if (ret.back() == '.') {
            ret.pop_back();
            break;
        } else
            break;
    }
    return ret;
}
And here it is in action:
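(A minimal usage sketch to stand in for the live example; the expected output is shown in the comments.)
#include <iostream>

int main() {
    std::cout << formatFloatingUpToN(3, 5.12345678) << "\n";  // 5.123
    std::cout << formatFloatingUpToN(3, 25.12345678) << "\n"; // 25.123
    std::cout << formatFloatingUpToN(3, 5.1) << "\n";         // 5.1
}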