I'm trying to understand the strange behavior of the following program. Obviously an overflow occurs while initializing the global variable "bug", yet the program raises a floating point exception during the innocent calculation 1.0 + 2.0.
#include <iostream>
#include <cmath>
#include <fenv.h>

using namespace std;

// Overflows: 10^(10^10) is far beyond the range of a double, so this
// initialization yields +inf and raises the FE_OVERFLOW flag.
const double bug = pow(10.0, pow(10.0, 10.0));

int main(void)
{
    feenableexcept(-1); // enable trapping on all FP exceptions (GNU extension)
    cout << "before" << endl;
    cout << 1.0 + 2.0 << endl;
    cout << "after" << endl;
    return 0;
}
I tried compiling it with both g++ and clang++, and got the same output in both cases:
before
Floating point exception
const double bug = pow(10.0, pow(10.0, 10.0)); should be used, because pow expects (double, double) arguments and you are passing (int, int).
I once encountered a similar case where a floating point error manifested itself at strange places. As I understood it, this happened because the FPU status register is not synchronized on every floating point instruction, so the error may appear at a seemingly random spot. By the way, I've just compiled and run your program and it finished without any issues.
My solution was to clear the FPU status register after the faulty calculation (of course this is a hack, but at the time I couldn't analyze that math library).
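For reference, here is a minimal sketch of that workaround using the standard <cfenv> interface (the assumption that pow raises the FE_OVERFLOW flag here matches glibc behaviour; other platforms may differ):
#include <cfenv>
#include <cmath>
#include <fenv.h> // feenableexcept (GNU extension)
#include <iostream>

// The overflowing initialization yields +inf and leaves FE_OVERFLOW set
// in the floating point status register before main() even starts.
const double bug = std::pow(10.0, std::pow(10.0, 10.0));

int main()
{
    // Clear the stale flags *before* unmasking traps; otherwise the first
    // FP instruction after feenableexcept can deliver a spurious SIGFPE.
    std::feclearexcept(FE_ALL_EXCEPT);
    feenableexcept(FE_ALL_EXCEPT & ~FE_INEXACT); // FE_INEXACT would trap on almost everything
    std::cout << 1.0 + 2.0 << std::endl;         // no longer traps
    return 0;
}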
Consider the following program, which is intended to print a floating point number to three decimal places:
#include <iostream>
#include <string>
#include <sstream>
#include <iomanip>

int main() {
    double val = 1.234567890e50;
    std::stringstream ss;
    ss << std::fixed << std::setprecision(3);
    ss << val;
    std::cout << ss.str() << std::endl;
    return 0;
}
This number cannot be represented exactly as a double, but that is irrelevant here.
On GCC 5.1, the program prints
123456789000000004671007453916432257001527036608512.000
On Embarcadero C++ Builder 10.1 (compiler bcc32c version 3.3.1), the output is:
1.234567890000000047000000000000000000000e+50
Why does the C++ Builder output not match the selected floating point notation, which is std::fixed? Even if the number is 10^300, GCC shows it using the selected notation.
Why do these two compilers work differently? Does the C++ standard define how the string conversion should work in this case?
Embarcadero C++ Builder 10.1 has a bug.
Together with std::fixed, std::setprecision(3) sets the number of digits displayed after the decimal separator to exactly 3, irrespective of whether or not the floating point scheme on that platform can represent the number exactly.
GCC 5.1 is compliant with this.
Embarcadero C++ Builder 10.1 is not.
See http://en.cppreference.com/w/cpp/io/manip/setprecision, which closely mirrors the C++ standard.
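As a quick sanity check of the conforming behaviour, this minimal sketch prints a huge value under the same manipulators; a compliant library emits the full 301-digit integer part followed by exactly three fractional digits:
#include <iostream>
#include <iomanip>

int main() {
    // With std::fixed, setprecision(3) means exactly 3 digits after the
    // decimal point, however large the integer part of the value is.
    std::cout << std::fixed << std::setprecision(3) << 1e300 << std::endl;
    return 0;
}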
I'm trying to get the C++ library to generate properly formatted USD output (the $ sign, commas separating each group of three digits, etc.).
I'm close, but I cannot get the right alignment to work:
#include <iostream>
#include <iomanip>
#include <locale>

using namespace std;

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    locale myloc;
    const money_put<char>& mpUS = use_facet<money_put<char> >(myloc);
    cout.imbue(myloc);
    cout << showbase << fixed;
    cout << "A";
    cout.width(30);
    cout.setf(std::ios::right);
    mpUS.put(cout, false, cout, ' ', fiftyMil * 100); // convert to cents
    cout << "B" << endl;
    return 0;
}
I'm getting:
A$50,000,000.00 B
I want to get:
A $50,000,000.00B
Any ideas why this isn't working?
I'm using the latest Solaris compiler (12.4).
Update:
It seems like the issue is with the C++ libraries included with the Solaris compiler. This is the workaround I used:
#include <iostream>
#include <iomanip>
#include <locale>
#include <sstream>

using namespace std;

string getFormattedCcy(double amt) {
    ostringstream os;
    static locale myloc;
    static const money_put<char>& mpUS = use_facet<money_put<char> >(myloc);
    os.imbue(myloc);
    os << showbase << fixed;
    mpUS.put(os, false, os, ' ', amt * 100);
    return os.str();
}

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    cout << "A";
    cout.setf(std::ios::right);
    cout.width(30);
    cout << getFormattedCcy(fiftyMil);
    cout << "B" << endl;
    return 0;
}
You have a couple of problems--one with your code, another that looks like it's in your implementation.
The problem in your code is pretty trivial. Since you're using a default-constructed locale, it should be using the "C" locale, which shouldn't write out the $ or thousands separators.
That part is easy to fix. Change locale myloc; to locale myloc(""); to get a localized locale (so to speak).
I doubt that'll fix the justification problem you're seeing though. That looks to me like it's a problem with the standard library you're using. When I run your code (with the correction above) I get what I'd expect:
A $50,000,000.00B
That's with Visual C++ though (and despite a compiler that conforms fairly poorly, its standard library is about as good as they come).
Also note that right justification is the default, so the line:
cout.setf(std::ios::right);
...should have no effect (but I suspect you knew that, and added it in the hope of getting it to work when it didn't otherwise).
As far as getting things to work with the Sun Oracle compiler goes, the most obvious suggestion is to switch to a standard library that works better. That leads to another question: whether to try to get a different standard library to work with the compiler you're using, or to switch to a different compiler such as Clang or gcc.
From what I understand, 12.4 was a pretty serious improvement in terms of C++ conformance, but I don't think either the compiler or (apparently) the standard library is really competitive with gcc or Clang yet. OTOH, you may not have a choice, in which case essentially your only route is to build a different standard library with your existing compiler and hope for the best.
If you can't even do that, you could try setting the locale correctly and just writing the number with std::cout << fiftyMil;, hope it at least gives you the commas as it should, and then add the currency sign separately.
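A minimal sketch of that last-resort approach (the "$" is hard-coded rather than taken from the locale's moneypunct facet, and the digit grouping depends on the environment's native locale):
#include <iostream>
#include <iomanip>
#include <locale>
#include <sstream>

int main() {
    double fiftyMil = 50000000.0;
    // Build "$" plus the grouped number into one string, so that setw()
    // right-justifies the whole amount instead of just the first token.
    std::ostringstream os;
    os.imbue(std::locale("")); // digit grouping comes from the native numpunct
    os << "$" << std::fixed << std::setprecision(2) << fiftyMil;
    std::cout << "A" << std::setw(30) << os.str() << "B" << std::endl;
    return 0;
}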
As an aside, if you do get an updated (C++11 or newer) library, you can use put_money to simplify the code quite a bit:
#include <iostream>
#include <iomanip>
#include <locale>

using namespace std;

int main() {
    double fiftyMil = 50000000.0; // 50 million bucks
    std::locale myloc("");
    cout.imbue(myloc);
    // put_money expects the value in the smallest currency unit (cents here).
    cout << "A" << showbase << setw(30) << put_money(fiftyMil * 100) << "B";
}
Consider the following code, which uses Boost to create a multiprecision floating-point number a.
How do I use the Boost library to invoke trigonometric functions on such a number?
For example, I would like to calculate sin(a).
#include <iostream>
#include "boost/multiprecision/cpp_bin_float.hpp"

using namespace std;
using namespace boost::multiprecision;

typedef number<backends::cpp_bin_float<24, backends::digit_base_2, void, boost::int16_t, -126, 127>, et_off> float32;

int main(void) {
    float32 a("0.5");
    return 0;
}
It looks like there is a limitation in the library. When the precision is dropped too low, the sin implementation no longer compiles.
Some intermediate calculations are being done in double precision. The assignment into the result type would be lossy and hence doesn't compile.
Your chosen type actually corresponds to cpp_bin_float_single. That doesn't compile.
As soon as you select cpp_bin_float_double (precision 53 binary digits) or higher, you'll be fine.
I suppose this limitation could be viewed as a bug in some respects. You might report it to the library devs, who will be able to judge whether the related code could use single-precision floats there without hurting the convergence of the sin approximation. With a higher-precision type such as cpp_bin_float_100, the call compiles and works as expected:
#include <boost/multiprecision/cpp_bin_float.hpp>
#include <iostream>
#include <iomanip> // for setprecision

using namespace std;
using namespace boost::multiprecision;

int main() {
    cpp_bin_float_100 a = 1;
    cout << setprecision(50);
    cout << sin(a) << endl;
    return 0;
}
I've verified the digits with Wolfram Mathematica and they are correct.
When I use the atan function from cmath and math.h on a floating point number, I seem to get different answers:
#include <cmath>
#include <math.h>
#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(20) << atan(-0.57468467f) << std::endl;
    std::cout << std::setprecision(20) << std::atan(-0.57468467f) << std::endl;
    // I get:
    // -0.52159727580733605823
    // -0.52159726619720458984
}
Why does this happen? Do the two libraries implement atan differently?
math.h's atan takes a double and returns a double, whereas cmath's atan is overloaded, so a float argument (as used here) is computed in float and yields a float result. Thus the difference in output comes from using two different floating-point types. To make the two calls use the same type, either remove the f suffix from the literals or change the first atan to atanf.
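To illustrate, a minimal sketch: once both calls compute in the same type, the outputs match.
#include <cmath>
#include <math.h>
#include <iostream>
#include <iomanip>

int main() {
    // Both in double: drop the f suffix, so ::atan and std::atan
    // receive the same double argument.
    std::cout << std::setprecision(20) << atan(-0.57468467) << std::endl;
    std::cout << std::setprecision(20) << std::atan(-0.57468467) << std::endl;
    // Both in float: atanf computes in float, matching the float
    // overload of std::atan.
    std::cout << std::setprecision(20) << atanf(-0.57468467f) << std::endl;
    std::cout << std::setprecision(20) << std::atan(-0.57468467f) << std::endl;
    return 0;
}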
I'm using _GLIBCXX_DEBUG mode to help find errors in my code, but I'm running into a problem which I think is an error in the library itself; hopefully someone can tell me I'm just doing something wrong. Here is a short example which reproduces the problem:
#define _GLIBCXX_DEBUG
#include <iostream>
#include <sstream>

int main(int argc, const char* argv[]) {
    std::ostringstream ostr;
    ostr << 1.2;
    std::cout << "Result: " << ostr.str() << std::endl;
    return 0;
}
If I comment out the #define then the output is (as expected):
Result: 1.2
With the _GLIBCXX_DEBUG define in place however the output is simply:
Result:
I've tracked this down to the _M_num_put field of the stream being left as NULL, which causes an exception to be thrown (and caught) in the stream and results in no output for the number. _M_num_put is supposed to be a std::num_put from the locale (I don't claim to understand how that's supposed to work, it's just what I've learned in my searching so far).
I'm running this on a Mac with XCode and have tried it with both "LLVM GCC 4.2" and "Apple LLVM Compiler 3.0" as the compiler with the same results.
I'd appreciate any help in solving this. I want to keep running my code with _GLIBCXX_DEBUG mode enabled, but this is interfering with that.
Someone else has seen this over at cplusplus.com, and here at stackoverflow, too.
Consensus is that it is a known bug in gcc 4.2 for Mac OS, and since that compiler is no longer being updated, it is unlikely to ever be fixed.
Seems to me that you can either (1) use LLVM, or (2) build your own GCC and use it.