Why does std::boolalpha ignore field width when using clang? - c++

The following code gives different results with the g++ 7 compiler and Apple clang++. Did I run into a bug in clang in the alignment of bool output when std::boolalpha is used, or did I make a mistake?
#include <string>
#include <sstream>
#include <iostream>
#include <iomanip>
template<typename T>
void print_item(std::string name, T& item) {
std::ostringstream ss;
ss << std::left << std::setw(30) << name << "= "
<< std::right << std::setw(11) << std::setprecision(5)
<< std::boolalpha << item << " (blabla)" << std::endl;
std::cout << ss.str();
}
int main() {
int i = 34;
std::string s = "Hello!";
double d = 2.;
bool b = true;
print_item("i", i);
print_item("s", s);
print_item("d", d);
print_item("b", b);
return 0;
}
The difference is:
// output from g++ version 7.2
i                             =          34 (blabla)
s                             =      Hello! (blabla)
d                             =           2 (blabla)
b                             =        true (blabla)
// output from Apple clang++ 8.0.0
i                             =          34 (blabla)
s                             =      Hello! (blabla)
d                             =           2 (blabla)
b                             = true (blabla)
(Note that the only difference is the missing fill-padding before true in the clang/libc++ output.)

As T.C. mentioned, this is LWG 2703:
No provision for fill-padding when boolalpha is set
N4582 subclause 25.4.2.2.2 [facet.num.put.virtuals] paragraph 6 makes
no provision for fill-padding in its specification of the behaviour
when (str.flags() & ios_base::boolalpha) != 0.
As a result I do not see a portable fix for this on Clang (more precisely, on libc++). However, notice that:
libc++ implements the standard wording exactly and therefore applies no fill-padding here.
libstdc++ and MSVC do apply padding and alignment.
PS: LWG stands for Library Working Group.
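A workaround (my own sketch, not part of the original answer): format the bool as a string yourself, so the ordinary string padding rules apply on every implementation:
#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    bool b = true;
    std::ostringstream ss;
    // Convert the bool to text first; setw/right then pad it like any other string.
    ss << std::right << std::setw(11) << (b ? "true" : "false") << " (blabla)\n";
    std::cout << ss.str();   // prints "       true (blabla)" on libc++ and libstdc++ alike
}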

Related

Setprecision in a function is also applying in another function. I can't seem to know why [duplicate]


Is this a bug in std::gcd?

I've come across this behavior of std::gcd that I found unexpected:
#include <iostream>
#include <numeric>
int main()
{
int a = -120;
unsigned b = 10;
//both a and b are representable in type C
using C = std::common_type<decltype(a), decltype(b)>::type;
C ca = std::abs(a);
C cb = b;
std::cout << a << ' ' << ca << '\n';
std::cout << b << ' ' << cb << '\n';
//first one should equal second one, but doesn't
std::cout << std::gcd(a, b) << std::endl;
std::cout << std::gcd(std::abs(a), b) << std::endl;
}
Run on compiler explorer
According to cppreference both calls to std::gcd should yield 10, as all preconditions are satisfied.
In particular, it is only required that the absolute values of both operands are representable in their common type:
If either |m| or |n| is not representable as a value of type std::common_type_t<M, N>, the behavior is undefined.
Yet the first call returns 2.
Am I missing something here?
Both gcc and clang behave this way.
Looks like a bug in libstdc++. If you add -stdlib=libc++ to the CE command line, you'll get:
-120 120
10 10
10
10
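In the meantime, if you want to sidestep the signed/unsigned conversion entirely, one option (my own sketch, not from the answers above) is to normalize both arguments to a common unsigned type yourself before calling std::gcd:
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <numeric>

int main() {
    int a = -120;
    unsigned b = 10;
    // Take the absolute value in a wide signed type first, then convert,
    // so no surprising signed-to-unsigned wraparound reaches std::gcd.
    std::uint64_t ua = static_cast<std::uint64_t>(std::abs(static_cast<std::int64_t>(a)));
    std::uint64_t ub = b;
    std::cout << std::gcd(ua, ub) << '\n';   // 10
}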

boost rational cast to double

Using the following bit of code compiled against boost 1.62:
#include <boost/rational.hpp>
#include <iostream>
int main() {
auto val = boost::rational<int64_t>(499999, 2);
std::cout << val << std::endl;
std::cout << boost::rational_cast<double>(val) << std::endl;
}
I get the following output:
499999/2
250000
I would expect rational_cast to output 249999.5
Can anyone explain what I am doing wrong?
The value from rational_cast is correct; the stream's default precision of 6 significant digits simply rounds 249999.5 up to 250000 on output. Modify the default formatting for floating-point output, e.g. by adding std::fixed:
std::cout << std::fixed << boost::rational_cast<double>(val) << std::endl;
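A slightly fuller sketch (the intermediate variable d is my addition) showing that it is the formatting, not the cast, that changes what you see:
#include <boost/rational.hpp>
#include <cstdint>
#include <iomanip>
#include <iostream>

int main() {
    auto val = boost::rational<std::int64_t>(499999, 2);
    double d = boost::rational_cast<double>(val);
    std::cout << d << '\n';                           // 250000  (default: 6 significant digits)
    std::cout << std::fixed << d << '\n';             // 249999.500000
    std::cout << std::setprecision(1) << d << '\n';   // 249999.5 (fixed, 1 digit after the point)
}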

Print out every bit of variable like 0 or 1 in byte blocks [duplicate]

I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 00111010 (it's a char, so 1 byte)
b = 00001000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take so long but I'm curious as to if there is a standard way to do so.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
#include <iostream>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
Live On Coliru
#include <iostream>
#include <bitset>
int main() {
int a = -58, b = a>>3, c = -315;
std::cout << "a = " << std::bitset<8>(a) << std::endl;
std::cout << "b = " << std::bitset<8>(b) << std::endl;
std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format (from the <format> header) to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library that std::format is based on. {fmt} also provides the print function that makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
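For a complete program, something along these lines should work with {fmt} (the zero-padded width specifier {:08b} is my addition, and I am assuming a reasonably recent {fmt} where fmt::print is available from fmt/core.h):
#include <fmt/core.h>

int main() {
    unsigned char a = -58;                                    // wraps around to 198
    fmt::print("{:08b}\n", a);                                // 11000110
    fmt::print("{:08b}\n", static_cast<unsigned char>(5));    // 00000101, zero-padded to 8 bits
}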
If you want to display the bit representation of any object, not just an integer, remember to reinterpret as a char array first, then you can print the contents of that array, as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
void show_binrep(const T& a)
{
const char* beg = reinterpret_cast<const char*>(&a);
const char* end = beg + sizeof(a);
while(beg != end)
std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
std::cout << '\n';
}
int main()
{
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
show_binrep(a);
show_binrep(b);
show_binrep(c);
float f = 3.14;
show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin manipulator, like std::hex or std::dec, but it's not hard to output a number in binary yourself:
You output the left-most bit by masking all the others, left-shift, and repeat that for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
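A minimal sketch of that approach (the function name and the unsigned parameter type are my own choices):
#include <climits>
#include <iostream>

// Print the bits of an unsigned value, most significant bit first:
// test the top bit, output it, shift the value left by one, repeat.
void print_binary(unsigned value) {
    const unsigned top_bit = 1u << (sizeof(value) * CHAR_BIT - 1);
    for (unsigned i = 0; i < sizeof(value) * CHAR_BIT; ++i) {
        std::cout << ((value & top_bit) ? '1' : '0');
        value <<= 1;
    }
    std::cout << '\n';
}

int main() {
    print_binary(58);   // 00000000000000000000000000111010
}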
Similar to what is already posted, just using bit-shift and mask to get each bit; usable for any type, being a template. (CHAR_BIT from <climits> gives the number of bits in one byte.)
#include <iostream>
#include <climits>
template<typename T>
void printBin(const T& t){
size_t nBytes=sizeof(T);
char* rawPtr((char*)(&t));
for(size_t byte=0; byte<nBytes; byte++){
for(size_t bit=0; bit<CHAR_BIT; bit++){
std::cout<<(((rawPtr[byte])>>bit)&1);
}
}
std::cout<<std::endl;
};
int main(void){
for(int i=0; i<50; i++){
std::cout<<i<<": ";
printBin(i);
}
}
Reusable function:
#include <bitset>
#include <sstream>
#include <string>
template<typename T>
static std::string toBinaryString(const T& x)
{
std::stringstream ss;
ss << std::bitset<sizeof(T) * 8>(x);
return ss.str();
}
Usage:
int main(){
uint16_t x=8;
std::cout << toBinaryString(x);
}
This works with all kinds of integers.
#include <iostream>
#include <cmath> // in order to use pow() function
using namespace std;
string show_binary(unsigned int u, int num_of_bits);
int main()
{
cout << show_binary(128, 8) << endl; // should print 10000000
cout << show_binary(128, 5) << endl; // should print 00000
cout << show_binary(128, 10) << endl; // should print 0010000000
return 0;
}
string show_binary(unsigned int u, int num_of_bits)
{
string a = "";
u = u % (unsigned int) pow(2, num_of_bits); // keep only the lowest num_of_bits bits of u
int t = pow(2, num_of_bits - 1); // t starts at the highest power of 2 within num_of_bits
for(; t > 0; t = t/2) // t iterates down through the powers of 2
if(u >= t){ // check if the bit for the current power of 2 is set
u -= t;
a += "1"; // if so, add a 1
}
else {
a += "0"; // if not, add a 0
}
return a; // returns string
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
struct BinaryForm {
BinaryForm(const T& v) : _bs(v) {}
const std::bitset<sizeof(T)*CHAR_BIT> _bs;
};
template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
With an older C++ standard, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;
template<typename T>
string toBinary(const T& t)
{
string s = "";
int n = sizeof(T)*8;
for(int i=n-1; i>=0; i--)
{
s += (t & (1 << i))?"1":"0";
}
return s;
}
int main()
{
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
cout << "a = " << a << " => " << toBinary(a) << endl;
cout << "b = " << b << " => " << toBinary(b) << endl;
cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
(Since a and b are chars, operator<< prints them as raw characters rather than numbers, which is why no readable value appears before the arrows.)
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive (r below holds the value to be printed). It also avoids outputting leading zeros and does not rely on <bitset>:
std::string s;
do {
s = std::to_string(r & 1) + s;
} while ( r>>=1 );
std::cout << s;
You should note however that this solution will increase your runtime, so if you are competing for optimization or not competing at all you should use one of the other solutions on this page.
Here is a way to get at the raw bit pattern of a number by reinterpreting its storage (note that this cast is only well-defined when the types are compatible; std::memcpy is the safe, general approach):
unsigned int i = *(unsigned int*) &x;
Is this what you're looking for?
std::cout << std::hex << val << std::endl;

Set back default floating point print precision in C++

I want to control the precision for a double during a comparison, and then come back to default precision, with C++.
I intend to use setPrecision() to set the precision. What then is the syntax, if any, to set the precision back to its default?
I am doing something like this
std::setPrecision(math.log10(m_FTOL));
I do some stuff, and I would like to come back to default double comparison right afterwards.
I modified it like this, and I still have some errors
std::streamsize prec = std::ios_base::precision();
std::setprecision(cmath::log10(m_FTOL));
but both cmath and std::ios_base fail at compilation. Could you help?
You can get the precision before you change it with std::ios_base::precision, and then use that to change it back later.
You can see this in action with:
#include <ios>
#include <iostream>
#include <iomanip>
int main (void) {
double d = 3.141592653589;
std::streamsize ss = std::cout.precision();
std::cout << "Initial precision = " << ss << '\n';
std::cout << "Value = " << d << '\n';
std::cout.precision (10);
std::cout << "Longer value = " << d << '\n';
std::cout.precision (ss);
std::cout << "Original value = " << d << '\n';
std::cout << "Longer and original value = "
<< std::setprecision(10) << d << ' '
<< std::setprecision(ss) << d << '\n';
std::cout << "Original value = " << d << '\n';
return 0;
}
which outputs:
Initial precision = 6
Value = 3.14159
Longer value = 3.141592654
Original value = 3.14159
Longer and original value = 3.141592654 3.14159
Original value = 3.14159
The code above shows two ways of setting the precision: first by calling std::cout.precision(N), and second by using the stream manipulator std::setprecision(N).
But you need to keep in mind that the precision is for outputting values via streams; it does not directly affect comparisons of the values themselves with code like:
if (val1 == val2) ...
In other words, even though the output may be 3.14159, the value itself is still the full 3.141592653589 (subject to normal floating-point limitations, of course).
If you want to do that, you'll need to check whether the values are close enough rather than equal, with code such as:
if (fabs(val1 - val2) < 0.0001) ...
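For example, a small helper along these lines (the name and tolerance choice are mine, not from the answer):
#include <algorithm>
#include <cmath>
#include <iostream>

// Sketch: true when a and b differ by less than a tolerance scaled to their magnitudes.
bool nearly_equal(double a, double b, double eps = 1e-9) {
    return std::fabs(a - b) <= eps * std::max({1.0, std::fabs(a), std::fabs(b)});
}

int main() {
    std::cout << std::boolalpha << nearly_equal(0.1 + 0.2, 0.3) << '\n';   // true
    std::cout << std::boolalpha << (0.1 + 0.2 == 0.3) << '\n';             // false
}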
Use C++20 std::format and {:.2} instead of std::setprecision
Finally, this will be the superior choice once you can use it:
#include <format>
#include <iostream>
#include <string>
int main() {
std::cout << std::format("{:.3} {:.4}\n", 3.1415, 3.1415);
}
Expected output:
3.14 3.142
This will therefore completely overcome the madness of modifying std::cout state.
The {fmt} library, which std::format is based on, implements it for compilers that do not yet have official support: https://github.com/fmtlib/fmt. Install on Ubuntu 22.04:
sudo apt install libfmt-dev
Modify source to replace:
<format> with <fmt/core.h>
std::format with fmt::format
main.cpp
#include <iostream>
#include <fmt/core.h>
int main() {
std::cout << fmt::format("{:.3} {:.4}\n", 3.1415, 3.1415);
}
and compile and run with:
g++ -std=c++11 -o main.out main.cpp -lfmt
./main.out
Output:
3.14 3.142
See also:
How do I print a double value with full precision using cout?
std::string formatting like sprintf
Pre-C++20 / pre-fmt: save the entire state with std::ios::copyfmt
You might also want to restore the entire previous state with std::ios::copyfmt in these situations, as explained at: Restore the state of std::cout after manipulating it
main.cpp
#include <iomanip>
#include <iostream>
int main() {
constexpr float pi = 3.14159265359;
constexpr float e = 2.71828182846;
// Sanity check default print.
std::cout << "default" << std::endl;
std::cout << pi << std::endl;
std::cout << e << std::endl;
std::cout << std::endl;
// Change precision format to scientific,
// and restore default afterwards.
std::cout << "modified" << std::endl;
std::ios cout_state(nullptr);
cout_state.copyfmt(std::cout);
std::cout << std::setprecision(2);
std::cout << std::scientific;
std::cout << pi << std::endl;
std::cout << e << std::endl;
std::cout.copyfmt(cout_state);
std::cout << std::endl;
// Check that cout state was restored.
std::cout << "restored" << std::endl;
std::cout << pi << std::endl;
std::cout << e << std::endl;
std::cout << std::endl;
}
GitHub upstream.
Compile and run:
g++ -ggdb3 -O0 -std=c++11 -Wall -Wextra -pedantic -o main.out main.cpp
./main.out
Output:
default
3.14159
2.71828
modified
3.14e+00
2.72e+00
restored
3.14159
2.71828
Tested on Ubuntu 19.04, GCC 8.3.0.
You need to keep track of your current precision and then reset it back once you are done with the operations that needed the modified precision. For this you can use std::ios_base::precision:
streamsize precision ( ) const;
streamsize precision ( streamsize prec );
The first syntax returns the value of the current floating-point precision field for the stream.
The second syntax also sets it to a new value.
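A small RAII helper built on precision() (my own sketch, not from the answer) makes the save-and-restore automatic:
#include <ios>
#include <iostream>

// Saves a stream's precision on construction and restores it on destruction.
class PrecisionGuard {
public:
    explicit PrecisionGuard(std::ios_base& s) : stream_(s), saved_(s.precision()) {}
    ~PrecisionGuard() { stream_.precision(saved_); }
private:
    std::ios_base& stream_;
    std::streamsize saved_;
};

int main() {
    double d = 3.141592653589;
    {
        PrecisionGuard guard(std::cout);
        std::cout.precision(10);
        std::cout << d << '\n';   // 3.141592654
    }                             // precision restored here
    std::cout << d << '\n';       // 3.14159
}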
setprecision() can be used only for output operations and cannot be used for comparisons
To compare floats, say a and b, you have to do it explicitly like this:
if (std::fabs(a - b) < 1e-6) {
}
else {
}
You can use cout << setprecision(-1)