C++ Builder ignores std::fixed for big numbers

Consider the following program, which is intended to print a floating point number to three decimal places:
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>

int main() {
    double val = 1.234567890e50;
    std::stringstream ss;
    ss << std::fixed << std::setprecision(3);
    ss << val;
    std::cout << ss.str() << std::endl;
    return 0;
}
This number cannot be represented exactly as a double, but that is irrelevant here.
On GCC 5.1, the program prints
123456789000000004671007453916432257001527036608512.000
On Embarcadero C++ Builder 10.1 (compiler bcc32c version 3.3.1), the output is:
1.234567890000000047000000000000000000000e+50
Why does the C++ Builder output not match the selected floating point notation, which is std::fixed? Even if the number is 10^300, GCC shows it using the selected notation.
Why do these two compilers work differently? Does the C++ standard define how the string conversion should work in this case?

Embarcadero C++ Builder 10.1 has a bug.
std::setprecision(3), in combination with std::fixed, sets the number of digits displayed after the decimal separator to exactly 3, irrespective of whether or not the floating point format on that platform can represent the value exactly.
GCC 5.1 is compliant with this.
Embarcadero C++ Builder 10.1 is not.
See http://en.cppreference.com/w/cpp/io/manip/setprecision, which closely mirrors the wording of the C++ standard.
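To illustrate the conforming behaviour described above, here is a minimal sketch; the expected outputs shown in the comments assume a conforming standard library (such as GCC's):
#include <iostream>
#include <iomanip>

int main() {
    double v = 12345.6789;
    // with std::fixed, the precision counts digits after the decimal point
    std::cout << std::fixed << std::setprecision(3) << v << '\n';      // 12345.679
    // with std::scientific, it counts digits after the decimal point of the mantissa
    std::cout << std::scientific << std::setprecision(3) << v << '\n'; // 1.235e+04
    return 0;
}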

Related

How to print a double to a file with a comma as the decimal separator (instead of a dot)

I need to write numbers to a CSV file.
When the file is written, the numbers use a dot as the decimal separator, but I need a comma.
Here is an example.
If I print this number to the terminal using the locale method, I get a number with a comma, but in the file the same number appears with a dot. I do not understand why.
How can I do this?
#include <iostream>
#include <locale>
#include <clocale>  // std::setlocale
#include <string>   // std::string, std::to_string
#include <fstream>
using namespace std;

int main()
{
    double x = 2.87;
    std::setlocale(LC_NUMERIC, "de_DE");
    std::cout.imbue(std::locale(""));
    std::cout << x << std::endl;
    ofstream outputfile("out.csv");
    if (outputfile.is_open())
    {
        outputfile << to_string(x) << "\n\n";
    }
    return 0;
}
Thanks in advance.
Locales are system-specific. You probably just made a typo; try "de-DE", which will probably work (at least it does on my Windows).
However, if your program is not inherently German-centric, then abusing the German locale just for the side effect of getting a specific decimal point character is bad programming style, I think.
Here is an alternative solution using std::numpunct::do_decimal_point:
#include <string>
#include <fstream>
#include <locale>

struct Comma final : std::numpunct<char>
{
    char do_decimal_point() const override { return ','; }
};

int main()
{
    std::ofstream os("out.csv");
    os.imbue(std::locale(std::locale::classic(), new Comma));
    double d = 2.87;
    os << d << '\n'; // prints 2,87 into the file
}
This code specifically states that it just wants the standard C++ formatting with only the decimal point character replaced with ','. It makes no reference to specific countries or languages, or system-dependent properties.
Your issue is that std::to_string() uses the C locale functions. It appears that "de_DE" is not a valid locale name on your machine (or on Coliru, for that matter), so the default C locale is used and a '.' is written as the decimal point. The solution is to use "de_DE.UTF-8". As an aside, using "" for std::locale will not always produce commas; the result depends on the locale configured on your machine.
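Another way around the problem is to skip std::to_string() entirely and imbue the file stream itself, either with the Comma facet from the answer above or with a named locale. This is only a sketch; the locale name "de_DE.UTF-8" is an assumption and depends on what is installed on the system:
#include <fstream>
#include <locale>

int main()
{
    std::ofstream outputfile("out.csv");
    // Throws std::runtime_error if the named locale is not available on this system.
    outputfile.imbue(std::locale("de_DE.UTF-8"));
    double x = 2.87;
    outputfile << x << "\n";  // written as 2,87 when the locale is available
}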

Printf of long double, unknown conversion type character L

My program is pretty simple:
#include <iostream>
#include <stdio.h>
using namespace std;

int main()
{
    long double a = 4.5;
    printf("%Lg", a);
    return 0;
}
When compiled, there is one warning:
warning: unknown conversion type character 'L' in format [-Wformat=]|
The output in the console is
-1.28823e-231
The documentation is pretty clear about printing long doubles, it simply states that the correct parameter for this format is L. What am I doing wrong? I'm using codeblocks, mingw32-g++ compiler under Windows 10.
P.S.: cout produces the same output.
You have a compiler problem:
mingw uses the Microsoft C run-time libraries and their implementation of printf does not support the 'long double' type. As a work-around, you could cast to 'double' and pass that to printf instead.
The underlying reason is how long double is implemented on x86:
On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware (sometimes stored as 12 or 16 bytes to maintain data structure alignment), as specified in the C99 / C11 standards (IEC 60559 floating-point arithmetic (Annex F)). An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double.[2] The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format.[3]
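A minimal sketch of the cast workaround mentioned above:
#include <cstdio>

int main()
{
    long double a = 4.5L;
    // Cast to double so the Microsoft C runtime's %g conversion can format the value.
    std::printf("%g\n", (double)a);
    return 0;
}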
Instead of printf(), use std::cout coupled with std::scientific, for example:
#include <iostream>
std::cout << "scientific: " << std::endl << std::scientific << a;
It is best not to use both stdio.h and iostream in the same project, as they can sometimes interfere with each other.
PS. On the subject, also available are: std::hex, std::dec (decimal), std::boolalpha (true, false) and more.
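For completeness, a small sketch of the manipulators mentioned in the PS (expected outputs in the comments):
#include <iostream>

int main()
{
    std::cout << std::hex << 255 << '\n';         // ff
    std::cout << std::dec << 255 << '\n';         // 255
    std::cout << std::boolalpha << true << '\n';  // true
    return 0;
}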

Is this simple C++ program using <locale> correct?

This code seemed to work OK in (Ubuntu Trusty) versions of gcc and clang, and in Win 7 on a VM via mingw... Recently I upgraded to Wily, and builds made with clang crash consistently here.
#include <iostream>
#include <locale>
#include <string>

int main() {
    std::cout << "The locale is '" << std::locale("").name() << "'" << std::endl;
}
Sometimes it's a gibberish string followed by Aborted (core dumped), and sometimes it's an invalid free.
$ ./a.out
The locale is 'en_US.UTF-8QX�у�X�у����0�����P�����\�(��\�(��\�(��h��t�������������y���������ț�ԛ�������en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_UP����`�������������������������p�����������#��������������`�������������p��������������������#��#��#��`��������p������������0��P��p���qp��!en_US.UTF-8QЈ[�����\�(��\�(��\�(�����������#�� �����P�����0�����P�����\�(��\�(��\�(��Ȣ�Ԣ����������������(��4��#��L��en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8!�v[��������������#�� �����P�����0�����P�����\�(��\�(���(��h��t��������������������Ȥ�Ԥ�������en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8!��[�� ����[�������7����7��.,!!x�[��!��[��!�[��#�����������#�� �����P�����0�����P�����\�(��\�(��\�(��(��4��#��L��X��d��p��|������������n_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8ѻAborted (core dumped)
$ ./a.out
The locale is 'en_US.UTF-8QX\%�QX\%�Q�G�0H��H�PI��I�\:|�Q\D|�Q\>|�QhK�tK��K��K��K��K��Q�K��K��K��K��K��K�en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8en_US.UTF-8ѻ
*** Error in `./a.out': free(): invalid pointer: 0x0000000000b04a98 ***
Aborted (core dumped)
(Both program outputs above were abbreviated greatly or they would not fit in this question.)
I also got an invalid free on Coliru as well.
But this is very similar to example code on cppreference:
#include <iostream>
#include <locale>
#include <string>

int main()
{
    std::wcout << "User-preferred locale setting is " << std::locale("").name().c_str() << '\n';
    // on startup, the global locale is the "C" locale
    std::wcout << 1000.01 << '\n';
    // replace the C++ global locale as well as the C locale with the user-preferred locale
    std::locale::global(std::locale(""));
    // use the new global locale for future wide character output
    std::wcout.imbue(std::locale());
    // output the same number again
    std::wcout << 1000.01 << '\n';
}
Actually that code crashes Coliru also... :facepalm:
More crashes of similar code from Coliru.
Is this a bug in the C++ library used by Clang, or is this code defective?
Note also: these crashes seem to be restricted to the C++ API; if you use <clocale> instead, things seem to work OK, so it may just be some trivial problem in the C++ bindings over this?
Variations using setlocale: 1 2 3
Looks like this is caused by libstdc++'s ABI change in its basic_string, which was needed for C++11 conformance. To manage this transition, GCC added the abi_tag attribute, which changes the mangled name of functions so that functions for the new and old ABI can be distinguished, even if the change wouldn't otherwise affect the mangled name (e.g. the return type of a function).
This code
#include <locale>
#include <string>

int main() {
    std::locale().name();
}
on GCC emits a call to _ZNKSt6locale4nameB5cxx11Ev, which demangles to std::locale::name[abi:cxx11]() const, and returns a SSO string with the new ABI.
Clang, on the other hand, doesn't support the abi_tag attribute, and emits a call to _ZNKSt6locale4nameEv, which demangles to simply std::locale::name() const - which is the version returning a COW string (the old ABI).
The net result is that the program ends up trying to use a COW string as an SSO string when compiled with Clang. Havoc ensues.
The obvious workaround is to force the old ABI via -D_GLIBCXX_USE_CXX11_ABI=0.
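For example, a rebuild with the old ABI might look like this (assuming clang++ on Linux building against libstdc++; the file name is just a placeholder):
clang++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 locale.cpp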
I think the "" parameter might be corrupting something. I don't think it's a legal argument?
To verify it's nothing else, try running this:
#include <iostream>
#include <locale>

int main() {
    std::locale("").name();
}
It compiles and runs just fine with GCC:
g++ -Wall -pedantic locale.cpp
<= No errors, no warnings
./a.out
The locale is 'en_US.UTF-8'
<= Expected output
ADDENDUM:
Exactly the same with MSVS 2013 - no errors or warnings compiling; no errors running:
locale.cpp =>
#include <iostream>
#include <locale>
#include <string>

int main() {
    std::cout << "The locale is '" << std::locale("").name() << "'" << std::endl;
}
Output =>
locale
The locale is 'English_United States.1252'

Right aligning money_put results

I'm trying to get the C++ library to generate properly formatted USD output (a $ sign, commas as thousands separators, etc.).
I'm close, but I cannot get the right alignment to work:
#include <iostream>
#include <iomanip>
#include <locale>
using namespace std;

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    locale myloc;
    const money_put<char>& mpUS = use_facet<money_put<char> >(myloc);
    cout.imbue(myloc);
    cout << showbase << fixed;
    cout << "A";
    cout.width(30);
    cout.setf(std::ios::right);
    mpUS.put(cout, false, cout, ' ', fiftyMil * 100); // convert to cents
    cout << "B" << endl;
    return 0;
}
I'm getting:
A$50,000,000.00 B
I want to get:
A $50,000,000.00B
Any ideas why this isn't working?
I'm using the latest Solaris compiler (12.4)
Update:
It seems like the issue is with the C++ libraries included with the Solaris compiler. This is the workaround I used:
#include <iostream>
#include <iomanip>
#include <locale>
#include <sstream>
using namespace std;

string getFormattedCcy(double amt) {
    ostringstream os;
    static locale myloc;
    static const money_put<char>& mpUS = use_facet<money_put<char> >(myloc);
    os.imbue(myloc);
    os << showbase << fixed;
    mpUS.put(os, false, os, ' ', amt * 100);
    return os.str();
}

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    cout << "A";
    cout.setf(std::ios::right);
    cout.width(30);
    cout << getFormattedCcy(fiftyMil);
    cout << "B" << endl;
    return 0;
}
You have a couple of problems: one with your code, and another that looks like it's in your implementation.
The problem in your code is pretty trivial. Since you're using a default-constructed locale, it should be using the "C" locale, which shouldn't write out the $ or thousands separators.
That part is easy to fix. Change: locale myloc; to: locale myloc(""); to get a localized locale (so to speak).
I doubt that'll fix the justification problem you're seeing though. That looks to me like it's a problem with the standard library you're using. When I run your code (with the correction above) I get what I'd expect:
A $50,000,000.00B
That's with Visual C++ though (and despite a compiler that conforms fairly poorly, its standard library is about as good as they come).
Also note that right justification is the default, so the line:
cout.setf(std::ios::right);
...should have no effect (but I suspect you knew that, and added it in the hope of getting it to work when it didn't otherwise).
As far as how to get things to work with the Sun Oracle compiler, the most obvious suggestion would probably be to switch standard libraries to one that works better. That leads to another question: whether to try to get a different standard library to work with the compiler you're using, or switch to a different compiler such as Clang or gcc. From what I understand, 12.4 was a pretty serious improvement in terms of C++ conformance, but I don't think either the compiler or (apparently) the standard library is really competitive with gcc or Clang yet. OTOH, you may not have a choice, in which case essentially your only route is to build a different standard library with your existing compiler and hope for the best. If you can't even do that, you could try setting the locale correctly, just write the number with std::cout << fiftyMil;, hope it at least gives you commas as it should, and then add the currency sign separately (a sketch of that fallback follows).
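A minimal sketch of that last fallback; the locale obtained via "" and the exact grouping it produces are system-dependent assumptions:
#include <iostream>
#include <iomanip>
#include <locale>

int main() {
    double fiftyMil = 50000000.0;
    std::cout.imbue(std::locale(""));  // user's locale; grouping depends on the system
    std::cout << "$" << std::fixed << std::setprecision(2) << fiftyMil << '\n';
    // e.g. $50,000,000.00 under an en_US locale
    return 0;
}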
As an aside, if you do get an updated (C++11 or newer) library, you can use put_money to simplify the code quite a bit:
#include <iostream>
#include <iomanip>
#include <locale>
using namespace std;

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    std::locale myloc("");
    cout.imbue(myloc);
    cout << "A" << showbase << setw(30) << put_money(fiftyMil * 100) << "B";
}

Printing pi with large precision using GMP

Right now I have this little code that I want to print pi to x decimals:
#include <iostream>
#include <gmpxx.h>
#include <math.h>
using namespace std;

int main()
{
    mpf_set_default_prec(1000);
    mpf_t pi;
    mpf_init(pi);
    mpf_set_d(pi, atan(1)*4);
    cout << pi << endl;
}
I just set default_prec to 1000, as I thought that would give me plenty of decimals, but no matter what I set it to, I only get 5. How can I print more?
The problem is that atan(1)*4 is evaluated in plain double precision, since this part of the code has nothing to do with GMP and uses the standard C++ types. The program evaluates atan(1)*4 first and only then converts the (already rounded) result into an mpf_t.
On the GMP homepage there is a page on how to calculate Pi with many digits.
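As a much simpler (and slower) illustration of the idea - this is not the program from the GMP page - here is a sketch that computes pi with Machin's formula, pi = 16*arctan(1/5) - 4*arctan(1/239), keeping every step in GMP arithmetic. Note that the stream precision also has to be raised, since operator<< prints only as many digits as the stream's precision allows:
#include <iostream>
#include <gmpxx.h>

// arctan(1/x) via its Taylor series, evaluated entirely at 'prec' bits of precision
mpf_class arctan_inv(unsigned long x, mp_bitcnt_t prec)
{
    const mpf_class x2 = mpf_class(x, prec) * x;  // x squared
    mpf_class term(1, prec);
    term /= x;                                    // first term: 1/x
    mpf_class eps(1, prec);
    mpf_div_2exp(eps.get_mpf_t(), eps.get_mpf_t(), prec);  // stop once terms fall below 2^-prec
    mpf_class sum(0, prec);
    unsigned long n = 1;
    bool add = true;
    while (term > eps) {
        if (add) sum += term / n; else sum -= term / n;
        term /= x2;
        n += 2;
        add = !add;
    }
    return sum;
}

int main()
{
    const mp_bitcnt_t prec = 4000;  // bits of precision, roughly 1200 decimal digits
    mpf_class pi(0, prec);
    pi = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec);
    std::cout.precision(1000);      // operator<< honours the stream precision
    std::cout << pi << std::endl;
}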