I am generating a text file to be used as a FORTRAN input file. The FORTRAN program specifies that the values it reads must be in a format such that
1.0
must be printed as
0.1000000E+01
As of right now the closest I have gotten using iostream is
1.000000E+00
with the code
cout << setprecision(6) << fixed << scientific << uppercase;
_set_output_format(_TWO_DIGIT_EXPONENT); // MSVC-specific: force two-digit exponents
cout << 1.0 << endl;
Does anyone know the best way to get a leading zero as shown above, preferably using ostream instead of printf?
What you ask for is non-standard, but you can achieve it with a trick:
#include <iostream>
#include <iomanip>
#include <cmath>

class Double {
public:
    Double(double x) : value(x) {}
    const double value;
};

std::ostream& operator<<(std::ostream& stream, const Double& x) {
    // Handle zero up front so we never take log10(0)
    if (x.value == 0.) {
        stream << 0.0;
        return stream;
    }
    int exponent = static_cast<int>(std::floor(std::log10(std::abs(x.value))));
    double base = x.value / std::pow(10, exponent);
    // Shift one digit from the mantissa into the exponent: d.dddddd -> 0.ddddddd
    base /= 10;
    exponent += 1;
    stream << base << 'E' << exponent; // Change the format as needed
    return stream;
}

int main() {
    // Use it like this
    std::cout << std::setprecision(6) << std::fixed;
    std::cout << Double(-2.203e-15) << std::endl;
    return 0;
}
The Double wrapper is needed because you cannot redefine << for double.
I have not tested this way of separating exponent and mantissa against floating-point edge cases; maybe you can come up with a better alternative, but you get the idea :)
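For reference, with the fixed/setprecision(6) setup above, Double(-2.203e-15) prints as -0.220300E-14. Note that the exponent is streamed as a plain int (E1, E-14, ...), so to match FORTRAN's E+01 style exactly you would still need to force a sign and two digits on it, e.g. with showpos, setfill('0') and setw applied to the exponent.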
A thought in C (not a great answer here, since a C++ answer is preferred):
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[20];
    buf[0] = ' ';
    double x = -1.234567;
    sprintf(&buf[1], "% .6E", x * 10);
    if (buf[3] == '.') { // only shuffle normal numbers; skips INF and NAN
        buf[0] = buf[1];
        buf[1] = '0';
        buf[3] = buf[2];
        buf[2] = '.';
    }
    // Cope with the potential leading space if needed
    if (buf[0] == ' ') memmove(&buf[0], &buf[1], strlen(buf));
    printf("%s\n", buf);
    // -0.1234567E+01
    return 0;
}
Weaknesses: trouble if the locale's decimal point is not '.', or if x is near INF (the x*10 can overflow).
Create a locale facet that prints NO decimal point, and imbue it.
cout << "0." << setprecision(6) << fixed << scientific << uppercase << number * 10;
So I'm trying to learn more about C++ and I'm practicing by making a calculator class for the quadratic equation. This is the code for it down below.
#include "QuadraticEq.h"
string QuadraticEq::CalculateQuadEq(double a, double b, double c)
{
double sqrtVar = sqrt(pow(b, 2) - (4 * a * c));
double eqPlus = (-b + sqrtVar)/(2 * a);
double eqMinus = (-b - sqrtVar) / (2 * a);
return "Your answers are " + to_string(eqPlus) + " and " + to_string(eqMinus);
}
I'm trying to make it so that the double variables eqPlus and eqMinus have only two decimal places. I've seen people say to use setprecision(), but I've only seen that function used in cout statements, and there are none in this class because I'm not printing a string, I'm returning one. So what would I do here? I remember learning a while back about some setiosflags() method; is there anything I can do with that?
You can use a std::stringstream with setprecision() instead of the usual std::cout.
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>
std::string adjustDP(double value, int decimalPlaces) {
    // change the number of decimal places in a number
    std::stringstream result;
    result << std::setprecision(decimalPlaces) << std::fixed << value;
    return result.str();
}

int main() {
    std::cout << adjustDP(2.25, 1) << std::endl;    // 2.2
    std::cout << adjustDP(0.75, 1) << std::endl;    // 0.8
    std::cout << adjustDP(2.25213, 2) << std::endl; // 2.25
    std::cout << adjustDP(2.25, 0) << std::endl;    // 2
}
However, as seen from the output, this approach inherits binary floating-point rounding behavior: exact halfway cases such as 2.25 and 0.75 round to even (giving 2.2 and 0.8), and values that cannot be represented exactly in binary may not round the way you expect.
Precision is the number of digits in a number. Scale is the number of
digits to the right of the decimal point in a number. For example, the
number 123.45 has a precision of 5 and a scale of 2.
I need to convert a double with a maximum scale of 7 (i.e., it may have up to 7 digits after the decimal point) to a __int128. However, I don't know in advance the actual scale of a given number.
#include <iostream>
#include <iomanip> // std::setprecision
#include <limits>  // std::numeric_limits
#include <string>
#include "json.hpp"
using json = nlohmann::json;
static std::ostream& operator<<(std::ostream& o, const __int128& x) {
    if (x == std::numeric_limits<__int128>::min()) return o << "-170141183460469231731687303715884105728";
    if (x < 0) return o << "-" << -x;
    if (x < 10) return o << (char)(x + '0');
    return o << x / 10 << (char)(x % 10 + '0');
}

int main()
{
    std::string str = R"({"time": [0.143]})";
    std::cout << "input: " << str << std::endl;
    json j = json::parse(str);
    std::cout << "output: " << j.dump(4) << std::endl;
    double d = j["time"][0].get<double>();
    __int128_t d_128_bad = d * 10000000;
    __int128_t d_128_good = __int128(d * 1000) * 10000;
    std::cout << std::setprecision(16) << std::defaultfloat << d << std::endl;
    std::cout << "d_128_bad: " << d_128_bad << std::endl;
    std::cout << "d_128_good: " << d_128_good << std::endl;
}
Output:
input: {"time": [0.143]}
output: {
"time": [
0.143
]
}
0.143
d_128_bad: 1429999
d_128_good: 1430000
As you can see, the converted double is not the expected 1430000; instead it is 1429999. I know the reason is that a floating-point number cannot be represented exactly. The problem could be solved if I knew the number of digits after the decimal point.
For example, I could instead use __int128_t(d * 1000) * 10000. However, I don't know the scale of a given number, which may be anything up to 7.
Question> Is there a possible solution for this? Also, I need to do this conversion very fast.
I'm not familiar with this library, but it does appear to have a mechanism for getting a JSON value's string representation (dump()). I would suggest you parse that string into your value rather than going through the double intermediate representation, since then you know the scale of the value exactly as it was written.
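For example, here is a rough sketch of that idea (parse_scaled is a hypothetical helper of my own; it assumes the input is a plain decimal literal such as the dump() of 0.143, with no exponent part):

#include <string>

// Convert a decimal string such as "0.143" into a fixed-point __int128
// with the given scale, without a double round-trip.
__int128 parse_scaled(const std::string& s, int scale = 7) {
    __int128 result = 0;
    int frac_digits = -1; // stays -1 until the '.' is seen
    bool negative = false;
    for (char ch : s) {
        if (ch == '-') negative = true;
        else if (ch == '.') frac_digits = 0;
        else if (ch >= '0' && ch <= '9') {
            result = result * 10 + (ch - '0');
            if (frac_digits >= 0) ++frac_digits;
        }
    }
    for (int i = (frac_digits < 0 ? 0 : frac_digits); i < scale; ++i)
        result *= 10; // pad out to the requested scale
    return negative ? -result : result;
}

With the example above, parse_scaled(j["time"][0].dump()) yields exactly 1430000.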
I have always found iomanip confusing and counterintuitive. I need help.
A quick internet search finds (https://www.vedantu.com/maths/precision) "We thus consider precision as the maximum number of significant digits after the decimal point in a decimal number" (the emphasis is mine). That matches my understanding too. However I wrote a test program and:
std::ostringstream stm; // declared here so the fragment is self-contained

stm << std::setprecision(3) << 5.12345678;
std::cout << "5.12345678: " << stm.str() << std::endl;
stm.str("");

stm << std::setprecision(3) << 25.12345678;
std::cout << "25.12345678: " << stm.str() << std::endl;
stm.str("");

stm << std::setprecision(3) << 5.1;
std::cout << "5.1: " << stm.str() << std::endl;
stm.str("");
outputs:
5.12345678: 5.12
25.12345678: 25.1
5.1: 5.1
If the precision is 3 then the output should be:
5.12345678: 5.123
25.12345678: 25.123
5.1: 5.1
Clearly the C++ standard has a different interpretation of the meaning of "precision" as relates to floating point numbers.
If I do:
stm.setf(std::ios::fixed, std::ios::floatfield);
then the first two values are formatted correctly, but the last comes out as 5.100.
How do I set the precision without padding?
You can try using this workaround:
#include <cmath>
#include <iomanip>

decltype(std::setprecision(1)) setp(double number, int p) {
    int e = static_cast<int>(std::abs(number));
    e = e != 0 ? static_cast<int>(std::log10(e)) + 1 + p : p;
    while (number != 0.0 && static_cast<int>(number *= 10) == 0 && e > 1)
        e--; // for numbers like 0.001: the leading zeros are not treated as digits by setprecision
    return std::setprecision(e);
}
And then:
auto v = 5.12345678;
stm << setp(v, 3) << v;
Another, more verbose but arguably more elegant, solution is to create a struct like this:
struct SetP { // renamed from __setp: identifiers with a double underscore are reserved
    double number;
    bool fixed = false;
    int prec;
};

std::ostream& operator<<(std::ostream& os, const SetP& obj)
{
    if (obj.fixed)
        os << std::fixed;
    else
        os << std::defaultfloat;
    os.precision(obj.prec);
    os << obj.number; // comment this out if you do not want to print the number immediately
    return os;
}

SetP setp(double number, int p) {
    SetP setter;
    setter.number = number;
    int e = static_cast<int>(std::abs(number));
    e = e != 0 ? static_cast<int>(std::log10(e)) + 1 + p : p;
    while (number != 0.0 && static_cast<int>(number *= 10) == 0)
        e--; // for numbers like 0.001: the leading zeros are not treated as digits by setprecision
    if (e <= 0) {
        setter.fixed = true;
        setter.prec = 1;
    } else
        setter.prec = e;
    return setter;
}
Using it like this:
auto v = 5.12345678;
stm << setp(v, 3);
I don't think you can do it nicely. There are two candidate formats: defaultfloat and fixed. For the former, "precision" is the maximum number of digits, where both sides of the decimal separator count. For the latter, "precision" is the exact number of digits after the decimal separator.
So your solution, I think, is to use fixed format and then manually clear trailing zeros:
#include <iostream>
#include <iomanip>
#include <sstream>
void print(const double number)
{
    std::ostringstream stream;
    stream << std::fixed << std::setprecision(3) << number;
    auto string = stream.str();
    while (string.back() == '0')
        string.pop_back();
    if (string.back() == '.') // in case number is integral; beware of localization issues
        string.pop_back();
    std::cout << string << "\n";
}

int main()
{
    print(5.12345678);
    print(25.12345678);
    print(5.1);
}
The fixed format gives almost what you want, except that it preserves trailing zeros. There is no built-in way to avoid that, but you can easily remove those zeros manually. For example, in C++20 you can do the following with std::format:
#include <format>

std::string format_fixed(double d) {
    auto s = std::format("{:.3f}", d);
    auto end = s.find_last_not_of('0');
    return end != std::string::npos ? std::string(s.c_str(), end + 1) : s;
}
std::cout << "5.12345678: " << format_fixed(5.12345678) << "\n";
std::cout << "25.12345678: " << format_fixed(25.12345678) << "\n";
std::cout << "5.1: " << format_fixed(5.1) << "\n";
Output:
5.12345678: 5.123
25.12345678: 25.123
5.1: 5.1
The same example works with the {fmt} library, which std::format is based on: godbolt.
Disclaimer: I'm the author of {fmt} and C++20 std::format.
I'm following a college course about operating systems and we're learning how to convert from binary to hexadecimal, decimal to hexadecimal, etc. and today we just learned how signed/unsigned numbers are stored in memory using the two's complement (~number + 1).
We have a couple of exercises to do on paper and I would like to be able to verify my answers before submitting my work to the teacher. I wrote a C++ program for the first few exercises but now I'm stuck as to how I could verify my answer with the following problem:
char a, b;
short c;
a = -58;
c = -315;
b = a >> 3;
and we need to show the binary representation in memory of a, b and c.
I've done it on paper and it gives me the following results (all the binary representations in memory of the numbers after the two's complement):
a = 11000110 (it's a char, so 1 byte)
b = 11111000 (it's a char, so 1 byte)
c = 11111110 11000101 (it's a short, so 2 bytes)
Is there a way to verify my answer? Is there a standard way in C++ to show the binary representation in memory of a number, or do I have to code each step myself (calculate the two's complement and then convert to binary)? I know the latter wouldn't take long, but I'm curious whether there is a standard way to do it.
The easiest way is probably to create an std::bitset representing the value, then stream that to cout.
#include <bitset>
...
char a = -58;
std::bitset<8> x(a);
std::cout << x << '\n';
short c = -315;
std::bitset<16> y(c);
std::cout << y << '\n';
Use on-the-fly conversion to std::bitset. No temporary variables, no loops, no functions, no macros.
#include <iostream>
#include <bitset>
int main() {
    int a = -58, b = a >> 3, c = -315;
    std::cout << "a = " << std::bitset<8>(a) << std::endl;
    std::cout << "b = " << std::bitset<8>(b) << std::endl;
    std::cout << "c = " << std::bitset<16>(c) << std::endl;
}
Prints:
a = 11000110
b = 11111000
c = 1111111011000101
In C++20 you can use std::format to do this:
unsigned char a = -58;
std::cout << std::format("{:b}", a);
Output:
11000110
On older systems you can use the {fmt} library, std::format is based on. {fmt} also provides the print function that makes this even easier and more efficient (godbolt):
unsigned char a = -58;
fmt::print("{:b}", a);
Disclaimer: I'm the author of {fmt} and C++20 std::format.
If you want to display the bit representation of any object, not just an integer, remember to reinterpret it as a char array first; then you can print the contents of that array as hex, or even as binary (via bitset):
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
void show_binrep(const T& a)
{
    const char* beg = reinterpret_cast<const char*>(&a);
    const char* end = beg + sizeof(a);
    while (beg != end)
        std::cout << std::bitset<CHAR_BIT>(*beg++) << ' ';
    std::cout << '\n';
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    show_binrep(a);
    show_binrep(b);
    show_binrep(c);
    float f = 3.14;
    show_binrep(f);
}
Note that most common systems are little-endian, so the output of show_binrep(c) is not the 11111110 11000101 you expect, because that's not how it's stored in memory. If you're looking for the value representation in binary, then a simple cout << bitset<16>(c) works.
Is there a standard way in C++ to show the binary representation in memory of a number [...]?
No. There's no std::bin, like std::hex or std::dec, but it's not hard to output a number binary yourself:
You output the left-most bit by masking all the others, left-shift, and repeat that for all the bits you have.
(The number of bits in a type is sizeof(T) * CHAR_BIT.)
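A minimal sketch of that loop (my own example; it shifts right rather than left, which comes to the same thing):

#include <iostream>
#include <climits>

int main() {
    int value = -58;
    unsigned u = static_cast<unsigned>(value); // avoid shifting a negative value
    for (int i = sizeof u * CHAR_BIT - 1; i >= 0; --i)
        std::cout << ((u >> i) & 1); // mask out everything except bit i
    std::cout << '\n'; // prints 11111111111111111111111111000110 for a 32-bit int
}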
Similar to what is already posted, just using bit-shift and mask to get each bit; usable for any type via a template. (The standard way to get the number of bits in a byte is CHAR_BIT from <climits>, which the code below uses.)
#include <iostream>
#include <climits>

template<typename T>
void printBin(const T& t) {
    size_t nBytes = sizeof(T);
    const char* rawPtr = reinterpret_cast<const char*>(&t);
    for (size_t byte = 0; byte < nBytes; byte++) { // bytes in memory order
        for (size_t bit = CHAR_BIT; bit-- > 0; ) { // each byte MSB first
            std::cout << ((rawPtr[byte] >> bit) & 1);
        }
    }
    std::cout << std::endl;
}

int main() {
    for (int i = 0; i < 50; i++) {
        std::cout << i << ": ";
        printBin(i);
    }
}
Reusable function:
#include <bitset>
#include <climits>
#include <sstream>
#include <string>

template<typename T>
static std::string toBinaryString(const T& x)
{
    std::stringstream ss;
    ss << std::bitset<sizeof(T) * CHAR_BIT>(x);
    return ss.str();
}
Usage:
#include <cstdint>
#include <iostream>

int main() {
    uint16_t x = 8;
    std::cout << toBinaryString(x);
}
This works with all kinds of integers.
#include <iostream>
#include <string>
#include <cmath> // in order to use the pow() function
using namespace std;

string show_binary(unsigned int u, int num_of_bits);

int main()
{
    cout << show_binary(128, 8) << endl;  // prints 10000000
    cout << show_binary(128, 5) << endl;  // prints 00000
    cout << show_binary(128, 10) << endl; // prints 0010000000
    return 0;
}

string show_binary(unsigned int u, int num_of_bits)
{
    string a = "";
    u %= static_cast<unsigned int>(pow(2, num_of_bits)); // truncate u to the requested width
    int t = pow(2, num_of_bits - 1); // t is the highest power of 2 that fits in num_of_bits bits
    for (; t > 0; t /= 2) {          // t iterates down through the powers of 2
        if (u >= t) {                // check if u contains the current value of t
            u -= t;
            a += "1";                // if so, add a 1
        } else {
            a += "0";                // if not, add a 0
        }
    }
    return a;
}
Using the std::bitset answers and convenience templates:
#include <iostream>
#include <bitset>
#include <climits>
template<typename T>
struct BinaryForm {
    BinaryForm(const T& v) : _bs(v) {}
    const std::bitset<sizeof(T) * CHAR_BIT> _bs;
};

template<typename T>
inline std::ostream& operator<<(std::ostream& os, const BinaryForm<T>& bf) {
    return os << bf._bs;
}
Using it like this:
auto c = 'A';
std::cout << "c: " << c << " binary: " << BinaryForm{c} << std::endl;
unsigned x = 1234;
std::cout << "x: " << x << " binary: " << BinaryForm{x} << std::endl;
int64_t z { -1024 };
std::cout << "z: " << z << " binary: " << BinaryForm{z} << std::endl;
Generates output:
c: A binary: 01000001
x: 1234 binary: 00000000000000000000010011010010
z: -1024 binary: 1111111111111111111111111111111111111111111111111111110000000000
If you are using an older C++ version, you can use this snippet:
#include <iostream>
#include <string>
using namespace std;

template<typename T>
string toBinary(const T& t)
{
    string s = "";
    int n = sizeof(T) * 8;
    for (int i = n - 1; i >= 0; i--)
    {
        s += (t & (1 << i)) ? "1" : "0";
    }
    return s;
}

int main()
{
    char a, b;
    short c;
    a = -58;
    c = -315;
    b = a >> 3;
    cout << "a = " << a << " => " << toBinary(a) << endl;
    cout << "b = " << b << " => " << toBinary(b) << endl;
    cout << "c = " << c << " => " << toBinary(c) << endl;
}
a = => 11000110
b = => 11111000
c = -315 => 1111111011000101
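(Nothing visible appears between = and => for a and b because they are chars holding -58 and -8, which are not printable ASCII characters.)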
I have had this problem when playing competitive coding games online. Here is a solution that is quick to implement and fairly intuitive. It also avoids leading zeros and any reliance on <bitset>.
unsigned r = 315; // assumed input value; keep it unsigned, so that >>= eventually reaches 0
std::string s;
do {
    s = std::to_string(r & 1) + s; // prepend the lowest bit
} while (r >>= 1);
std::cout << s;
You should note, however, that this solution will increase your runtime, so if you are competing on optimization, or not competing at all, you should use one of the other solutions on this page.
Here is a direct way to get at the stored bits of a number, although note that this pointer cast is type punning and formally violates the strict-aliasing rule:
unsigned int i = *(unsigned int*) &x;
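A safer equivalent is to memcpy the bytes instead of dereferencing a casted pointer (a sketch, assuming x is a float):

#include <cstring>

float x = 3.14f;
unsigned int i;
static_assert(sizeof i == sizeof x, "sizes must match");
std::memcpy(&i, &x, sizeof i); // i now holds the raw bit pattern of x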
Is this what you're looking for?
std::cout << std::hex << val << std::endl;
I would like to print a double value, into a string of no more than 8 characters. The printed number should have as many digits as possible, e.g.
5.259675
48920568
8.514e-6
-9.4e-12
I tried C++ iostreams, and printf-style, and neither respects the provided size in the way I would like it to:
cout << setw(8) << 1.0 / 17777.0 << endl;
printf( "%8g\n", 1.0 / 17777.0 );
gives:
5.62525e-005
5.62525e-005
I know I can specify a precision, but I would have to provide a very small precision here, in order to cover the worst case. Any ideas how to enforce an exact field width without sacrificing too much precision? I need this for printing matrices. Do I really have to come up with my own conversion function?
A similar question was asked 5 years ago: Convert double to String with fixed width, without a satisfying answer. I sure hope there has been some progress in the meantime.
This seems not too difficult, actually, although you can't do it in a single function call. The number of character places used by the exponent is really quite easy to predict:
const char* format;
if (value > 0) {
    if      (value < 10e-100) format = "%.1e";
    else if (value < 10e-10)  format = "%.2e";
    else if (value < 1e-5)    format = "%.3e";
}
and so on.
Only, the C standard, where the behavior of printf is defined, insists on at least two digits for the exponent, so it wastes some there. See c++ how to get "one digit exponent" with printf
Incorporating those fixes is going to make the code fairly complex, although still not as bad as doing the conversion yourself.
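For illustration, here is a rough sketch of that idea, computing the precision from the exponent instead of chaining comparisons (print8 and the 8-character budget are my own assumptions; only the scientific-notation branch is handled, so small integral values would deserve a plain %f path, and it assumes the C99 two-digit minimum exponent, whereas older MSVC runtimes print three digits, as the question's output shows):

#include <cstdio>
#include <cmath>

void print8(double value) {
    char buf[16];
    int exp = (value == 0) ? 0
            : static_cast<int>(std::floor(std::log10(std::fabs(value))));
    int prec = 8 - 6;                      // digit + '.' + 'e' + sign + 2 exponent digits = 6 chars
    if (value < 0) --prec;                 // leading '-'
    if (exp <= -100 || exp >= 100) --prec; // three-digit exponent
    if (prec < 0) prec = 0;
    std::snprintf(buf, sizeof buf, "%.*e", prec, value);
    std::puts(buf);
}

int main() {
    print8(1.0 / 17777.0); // 5.63e-05
    print8(-9.4e-12);      // -9.4e-12
}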
If you want to convert to fixed decimal numbers (i.e., drop the +/-"E" part), that is a lot easier to accomplish:
#include <iostream> // std::cout
#include <iomanip>  // std::setprecision
#include <sstream>  // std::ostringstream
#include <string>

std::string ToDecimal(double val, int maxChars)
{
    std::ostringstream buffer;
    buffer << std::fixed << std::setprecision(maxChars - 2) << val;
    std::string result = buffer.str();
    if (result.size() > static_cast<size_t>(maxChars))
        result.erase(maxChars); // hard cut at the field width
    if (result.find('.') != std::string::npos) {
        while (result.back() == '0') result.pop_back(); // trim trailing zeros
        if (result.back() == '.') result.pop_back();    // trim a dangling decimal point
    }
    return result;
}

int main()
{
    std::cout << ToDecimal(1.26743237e+015, 8) << std::endl;
    std::cout << ToDecimal(-1.0, 8) << std::endl;
    std::cout << ToDecimal(3.40282347e+38, 8) << std::endl;
    std::cout << ToDecimal(1.17549435e-38, 8) << std::endl;
    std::cout << ToDecimal(-1E4, 8) << std::endl;
    std::cout << ToDecimal(12.78e-2, 8) << std::endl;
}
Output:
12674323
-1
34028234
0
-10000
0.1278