floating exception using icc compiler - c++

I'm compiling my code via the following command:
icc -ltbb test.cxx -o test
Then when I run the program:
time ./mp6 100 > output.modified
Floating exception
4.871u 0.405s 0:05.28 99.8% 0+0k 0+0io 0pf+0w
I get a "Floating exception". This following is code in C++ that I had before the exception and after:
// before
if (j < E[i]) {
    temp += foo(0, trr[i], ex[i+j*N]);
}
// after
temp += (j < E[i])*foo(0, trr[i], ex[i+j*N]);
This is Boolean algebra... (j < E[i]) is either going to be 0 or 1, so the multiplication would result in either 0 or the foo() result. I don't see why this would cause a floating exception.
This is what foo() does:
int foo(int s, int t, int e) {
    switch(s % 4) {
    case 0:
        return abs(t - e)/e;
    case 1:
        return (t == e) ? 0 : 1;
    case 2:
        return (t < e) ? 5 : (t - e)/t;
    case 3:
        return abs(t - e)/t;
    }
    return 0;
}
foo() isn't a function I wrote, so I'm not too sure what it does... but I don't think the problem is with foo(). Is there something about Boolean algebra that I don't understand, or something that works differently in C++ than I expect? Any ideas why this causes an exception?
Thanks,
Hristo

You are almost certainly dividing by zero in foo.
A simple program of
int main()
{
    int bad = 0;
    return 25/bad;
}
also prints
Floating point exception
on my system.
So, you should check whether e is 0 when s % 4 is zero, or whether t is 0 when s % 4 is 2 or 3. Then return whatever value makes sense for your situation instead of trying to divide by zero.
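For illustration, here is one hedged way to guard foo's divisions; the zero fallbacks are placeholders, not necessarily the right values for your data:
#include <cstdlib>   // for abs

int foo_guarded(int s, int t, int e) {
    switch (s % 4) {
    case 0:
        return (e == 0) ? 0 : abs(t - e) / e;             // guard: e may be zero
    case 1:
        return (t == e) ? 0 : 1;                          // no division here
    case 2:
        return (t < e) ? 5 : ((t == 0) ? 0 : (t - e) / t); // guard: t may be zero
    case 3:
        return (t == 0) ? 0 : abs(t - e) / t;             // guard: t may be zero
    }
    return 0;
}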
@hristo: C++ will still evaluate the right-hand side of a multiplication even if the left-hand side is zero. It doesn't matter that the result should be zero; it matters that foo was called, evaluated, and caused an error.
Sample source:
#include <iostream>

int maybe_cause_exception(bool cause_it)
{
    int divisor = cause_it ? 0 : 10;
    return 10 / divisor;
}

int main()
{
    std::cout << "Do not raise exception: " << maybe_cause_exception(false) << std::endl;
    int x = 0;
    std::cout << "Before 'if' statement..." << std::endl;
    if(x)
    {
        std::cout << "Inside if: " << maybe_cause_exception(true) << std::endl;
    }
    std::cout << "Past 'if' statement." << std::endl;
    std::cout << "Cause exception: " << x * maybe_cause_exception(true) << std::endl;
    return 0;
}
Output:
Do not raise exception: 1
Before 'if' statement...
Past 'if' statement.
Floating point exception

Is it possible you are dividing by 0? It could be that an integer division by 0 is surfacing as a "Floating exception".
When you have the if, the computation isn't done if a division by 0 would happen. When you do the "Boolean algebra", the computation is done regardless, resulting in a divide by 0 error.
You're thinking that it will be temp += 0*foo(...); so it doesn't need to call foo (because 0 times anything will always be 0), but that's not how the compiler works. Both sides of a * have to be evaluated.
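If you want to keep it to a single statement, a conditional expression preserves the branch semantics, since foo is never called when the test is false:
temp += (j < E[i]) ? foo(0, trr[i], ex[i+j*N]) : 0;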

While I cannot tell you the exact cause of your floating-point exception, I can provide some information you might find useful in investigating future floating-point errors. I believe Mark has already shed some light on why you are having this particular problem.
The most portable way of determining if a floating-point exception condition has occurred and its cause is to use the floating-point exception facilities provided by C99 in fenv.h. There are 11 functions defined in fenv.h for manipulating the floating-point environment (see fenv(3) man page). You may also find this article to be of interest.
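As a minimal sketch of that facility (assuming an IEEE 754 environment; strictly conforming code would also set the FENV_ACCESS pragma, which not every compiler supports):
#include <cfenv>
#include <iostream>

int main()
{
    std::feclearexcept(FE_ALL_EXCEPT);    // clear any stale exception flags
    volatile double zero = 0.0;           // volatile defeats constant folding
    double r = 1.0 / zero;                // sets FE_DIVBYZERO under IEEE 754
    if (std::fetestexcept(FE_DIVBYZERO))
        std::cout << "FE_DIVBYZERO was raised, r = " << r << std::endl;
    return 0;
}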
On POSIX-compliant systems, SIGFPE is sent to a process when it performs an erroneous arithmetic operation, and this does not necessarily involve floating-point arithmetic. If the SIGFPE signal is handled and SA_SIGINFO is specified in the sa_flags for the call to sigaction(2), the si_code member of the siginfo_t structure should specify the reason for the fault.
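A hedged, POSIX-only sketch of that handler (fprintf is not strictly async-signal-safe; this is for illustration only):
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

extern "C" void fpe_handler(int, siginfo_t *info, void *) {
    // FPE_INTDIV: integer divide by zero; FPE_FLTDIV: floating-point divide by zero
    fprintf(stderr, "SIGFPE caught, si_code = %d\n", info->si_code);
    _Exit(1);   // returning would re-execute the faulting instruction
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = fpe_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGFPE, &sa, nullptr);

    volatile int zero = 0;
    return 25 / zero;   // raises SIGFPE with si_code == FPE_INTDIV
}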
From the wikipedia SIGFPE article:
A common oversight is to consider division by zero the only source of SIGFPE conditions. On some architectures (IA-32 included[citation needed]), integer division of INT_MIN, the smallest representable negative integer value, by −1 triggers the signal because the quotient, a positive number, is not representable.

When I suggested replacing the branch with a multiplication by one or zero, I did not consider that the if statement may be guarding against a numerical exception. The multiplication trick still evaluates the expression, but effectively throws it away. For a small enough expression, the trick is better than a conditional, but you have to make sure the expression can be evaluated at all.
You can still use the multiplication trick if you slightly transform the denominator.
Instead of x/t, use x/(t + !t), which does not affect anything if the denominator is nonzero (you are adding zero then) but allows the case t = 0 to be computed safely and then thrown away by the multiplication by zero.
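For example (a minimal sketch with hypothetical variables x and t):
// (t + !t) equals t when t != 0, and 1 when t == 0, so this division can
// never fault; when t == 0 the 0-or-1 guard factor discards the dummy result.
int ratio = x / (t + !t);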
And sorry, but be careful with my suggestions; I do not know all the details of your program. Plus, I tend to go wild about replacing branches with "clever" Boolean expressions.

Related

pow() function gives an error [duplicate]

Recently I wrote a block of code:
const int sections = 10;
for(int t = 0; t < 5; t++){
    int i = pow(sections, 5 - t - 1);
    cout << i << endl;
}
And the result is wrong:
9999
1000
99
10
1
If I use just this code:
for(int t = 0; t < 5; t++){
    cout << pow(sections, 5 - t - 1) << endl;
}
The problem doesn't occur anymore:
10000
1000
100
10
1
Can anyone give me an explanation? Thank you very much!
Due to the representation of floating-point values, pow(10.0, 5) could be 9999.9999999 or something like that. When you assign that to an integer, it gets truncated.
EDIT: In the case of cout << pow(10.0, 5); it looks like the output is rounded, but I don't have any supporting document right now confirming that.
EDIT 2: The comment made by BoBTFish and this question confirm that when pow(10.0, 5) is used directly in cout, it gets rounded.
When used with fractional exponents, pow(x,y) is commonly evaluated as exp(log(x)*y); such a formula would be mathematically correct if evaluated with infinite precision, but may in practice result in rounding errors. As others have noted, a value of 9999.999999999 when cast to an integer will yield 9999. Some languages and libraries use such a formulation all the time when using an exponentiation operator with a floating-point exponent; others try to identify when the exponent is an integer and use iterated multiplication when appropriate. Looking up documentation for the pow function, it appears that it's supposed to work when x is negative and y has no fractional part (when x is negative and y is even, the result should be pow(-x,y); when y is odd, the result should be -pow(-x,y)). It would seem logical that when y has no fractional part, a library which is going to go through the trouble of dealing with a negative x value should use iterated multiplication, but I don't know of any spec dictating that it must.
In any case, if you are trying to raise an integer to a power, it is almost certainly best to use integer maths for the computation or, if the integer to be raised is a constant or will always be small, simply use a lookup table (raising numbers from 0 to 15 by any power that would fit in a 64-bit integer would require only a 4,096-item table).
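For instance, a minimal sketch of the integer-math route (pow10_u64 is a hypothetical helper; exact for n up to 19 with 64-bit unsigned arithmetic):
#include <cstdint>

std::uint64_t pow10_u64(unsigned n)
{
    std::uint64_t r = 1;
    while (n--) r *= 10;   // exact integer multiplication, no rounding
    return r;              // exact for n <= 19; larger n overflows
}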
From Here
Looking at the pow() function: double pow (double base, double exponent); we know the parameters and the return value are all of double type. But the variables num, i and res are all of int type in the code above; when converting int to double or double to int, precision may be lost. For example (maybe not rigorous), the floating point unit (FPU) calculates pow(10, 4) = 9999.99999999, and then int(9999.9999999) = 9999 by type conversion in C++.
How to solve it?
Solution1
Change the code:
const int num = 10;
for(int i = 0; i < 5; ++i){
    double res = pow(num, i);
    cout << res << endl;
}
Solution2
Use a floating-point unit with higher calculation precision for double; for example, use SSE on a Windows CPU. In Code::Blocks 13.12, take these steps: Settings -> Compiler settings -> GNU GCC Compiler -> Other options, and add
-mfpmath=sse -msse3
What happens is that the pow function returns a double, so when you do this:
int i = pow(sections, 5 - t - 1);
the decimal .99999 is cut off and you get 9999.
Printing directly or comparing with 10000 is not a problem, because the value is rounded off, in a sense.
If the code in your first example is the exact code you're running, then you have a buggy library. Regardless of whether you're picking up std::pow or C's pow which takes doubles, even if the double version is chosen, 10 is exactly representable as a double. As such the exponentiation is exactly representable as a double. No rounding or truncation or anything like that should occur.
With g++ 4.5 I couldn't reproduce your (strange) behavior even using -ffast-math and -O3.
Now what I suspect is happening is that sections is not being assigned the literal 10 directly but instead is being read or computed internally such that its value is something like 9.9999999999999, which when raised to the fourth power generates a number like 9999.9999999. This is then truncated to the integer 9999 which is displayed.
Depending on your needs you may want to round either the source number or the final number prior to assignment into an int. For example: int i = pow(sections, 5- t -1) + 0.5; // Add 0.5 and truncate to round to nearest.
There must be some broken pow function in the global namespace. Then std::pow is "automatically" used instead in your second example because of ADL.
Either that or t is actually a floating-point quantity in your first example, and you're running into rounding errors.
You're assigning the result to an int. That coerces it, truncating the number.
This should work fine:
for(int t = 0; t < 5; t++){
    double i = pow(sections, 5 - t - 1);
    cout << i << endl;
}
What happens is that your answer is actually something like 99.9999 and not exactly 100. This is because pow returns a double. So you can fix this by using i = ceil(pow(...)).
Your code should be:
const int sections = 10;
for(int t = 0; t < 5; t++){
    int i = ceil(pow(sections, 5 - t - 1));
    cout << i << endl;
}

C++ Modulus returning wrong answer

Here is my code :
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    int n, i, num, m, k = 0;
    cout << "Enter a number :\n";
    cin >> num;
    n = log10(num);
    while (n > 0) {
        i = pow(10, n);
        m = num / i;
        k = k + pow(m, 3);
        num = num % i;
        --n;
        cout << m << endl;
        cout << num << endl;
    }
    k = k + pow(num, 3);
    return 0;
}
When I input 111 it gives me this
1
12
1
2
I am using Code::Blocks. I don't know what is wrong.
Whenever I use pow expecting an integer result, I add .5, so I use (int)(pow(10,m)+.5) instead of letting the compiler automatically convert pow(10,m) to an int.
I have read in many places that others have done exhaustive tests of some of the situations in which I add that .5 and found zero cases where it makes a difference. But accurately identifying the conditions in which it isn't needed can be quite hard. Using it when it isn't needed does no real harm.
If it makes a difference, it is a difference you want. If it doesn't make a difference, it has a tiny cost.
In the posted code, I would adjust every call to pow that way, not just the one I used as an example.
There is no equally easy fix for your use of log10, but it may be subject to the same problem. Since you expect a non-integer answer and want that non-integer answer truncated down to an integer, adding .5 would be very wrong. So you may need to find some more complicated workaround for the fundamental problem of working with floating point. I'm not certain, but assuming 32-bit integers, I think adding 1e-10 to the result of log10 before converting to int is never enough to change log10(10^n - 1) into log10(10^n), but always enough to correct the error that might have done the reverse.
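Applied to the loop from the question, the adjustments would look roughly like this (a sketch of the suggestion above, not a tested fix):
n = (int)(log10(num) + 1e-10);     // tiny nudge, per the log10 caveat above
i = (int)(pow(10, n) + 0.5);       // add .5 so truncation rounds to nearest
k = k + (int)(pow(m, 3) + 0.5);    // adjust every pow call the same way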
pow does floating-point exponentiation.
Floating point functions and operations are inexact, you cannot ever rely on them to give you the exact value that they would appear to compute, unless you are an expert on the fine details of IEEE floating point representations and the guarantees given by your library functions.
(and furthermore, floating-point numbers might even be incapable of representing the integers you want exactly)
This is particularly problematic when you convert the result to an integer, because the result is truncated to zero: int x = 0.999999; sets x == 0, not x == 1. Even the tiniest error in the wrong direction completely spoils the result.
You could round to the nearest integer, but that has problems too; e.g. with sufficiently large numbers, your floating point numbers might not have enough precision to be near the result you want. Or if you do enough operations (or unstable operations) with the floating point numbers, the errors can accumulate to the point you get the wrong nearest integer.
If you want to do exact, integer arithmetic, then you should use functions that do so. e.g. write your own ipow function that computes integer exponentiation without any floating-point operations at all.
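For instance, a minimal sketch of such an ipow using binary exponentiation (exact until the result overflows long long):
long long ipow(long long base, unsigned exp)
{
    long long result = 1;
    while (exp > 0) {
        if (exp & 1)         // low bit set: fold the current power in
            result *= base;
        base *= base;        // square for the next bit
        exp >>= 1;
    }
    return result;
}
With it, ipow(10, 4) is exactly 10000, and the truncation problem from the question cannot occur.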

Divide by zero prevention

What is 1.#INF, and why does casting to a float or double prevent a division by 0 from crashing?
Also, any great ideas of how to prevent division by 0? (Like any macro or template)?
int nQuota = 0;
int nZero = 3 / nQuota; //crash
cout << nZero << endl;
float fZero = 2 / nQuota; //crash
cout << fZero << endl;
If I use instead:
int nZero = 3 / (float)nQuota;
cout << nZero << endl;
//Output = -2147483648
float fZero = 2 / (float)nQuota;
cout << fZero << endl;
//Output = 1.#INF
1.#INF is positive infinity. You will get it when you divide a positive float by zero (if you divide the float zero itself by zero, then the result will be "not a number").
On the other hand, if you divide an integer by zero, the program will crash.
The reason float fZero = 2 / nQuota; crashes is because both operands of the / operator are integers, so the division is performed on integers. It doesn't matter that you then store the result in a float; C++ has no notion of target typing.
Why positive infinity cast to an integer is the smallest integer, I have no idea.
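To get the IEEE behavior in the first place, at least one operand must already be floating-point when the division happens, for example:
float fZero = 2.0f / nQuota;   // float division: yields inf when nQuota == 0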
Why does using (float) or (double) prevent a division by 0 from crashing?
It doesn't necessarily. The standard is amazingly sparse when it comes to floating point. Most systems use the IEEE floating-point standard nowadays, and that says that the default action for division by zero is to return ±infinity rather than crash. You can make it crash by enabling the appropriate floating-point exceptions.
Note well: The only thing the floating point exception model and the C++ exception model have in common is the word "exception". On every machine I work on, a floating point exception does not throw a C++ exception.
Also, any great ideas of how to prevent division by 0?
Simple answer: Don't do it.
This is one of those "Doctor, doctor it hurts when I do this!" kinds of situations. So don't do it! Your options:
Option #1: Make sure that the divisor is not zero.
Option #2: Do sanity checks on divisors that are user inputs. Always filter your user inputs for sanity. A user input value of zero when the number is supposed to be in the millions will cause all kinds of havoc besides overflow. Do sanity checks on intermediate values, too.
Option #3: Enable floating-point exceptions.
Making the default behavior allow errors (and these are almost always errors) to go through unchecked was IMHO a big mistake on the part of the standards committee. Use the default and those infinities and not-a-numbers will eventually turn everything into an Inf or a NaN.
The default should have been to stop floating-point errors in their tracks, with an option to allow things like 1.0/0.0 and 0.0/0.0 to take place. That isn't the case, so you have to enable those traps. Do that and you can oftentimes find the cause of the problem in short order.
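For example, a hedged sketch of enabling those traps on GNU/Linux (feenableexcept is a glibc extension, not standard C or C++; other platforms have their own mechanisms):
#include <fenv.h>

int main() {
    feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
    volatile double zero = 0.0;
    volatile double r = 1.0 / zero;   // now raises SIGFPE instead of returning inf
    return (int)r;                    // never reached
}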
Option #4: Write custom divide, custom multiply, custom square root, custom sine, ... functions.
This unfortunately is the route that many safety-critical software systems must take. It is a royal pain. Option #1 is out because it's just wishful thinking. Option #3 is out because the system cannot be allowed to crash. Option #2 is still a good idea, but it doesn't always work because bad data always has a way of sneaking in. It's Murphy's law.
BTW, the problem is a bit worse than just division by zero. 10^200 / 10^-200 will also overflow.
You usually check to make sure you aren't dividing by zero. The code below isn't particularly useful unless nQuota has a legitimate value, but it does prevent crashes:
int nQuota = 0;
int nZero = 0;
float fZero = 0;

if (nQuota)
    nZero = 3 / nQuota;
cout << nZero << endl;

if (nQuota)
    fZero = 2 / nQuota;
cout << fZero << endl;

c++ division by 0

I am running long simulations. I record the results into a vector to compute statistics about the data. I realized that, in theory, those samples could be the result of a division by zero; this is only theoretical, I am pretty sure it's not the case. In order to avoid rerunning the simulation after modifying the code, I was wondering what happens in that case. Would I be able to realize whether a division by 0 has occurred or not? Will I get error messages? (Exceptions are not being handled at the moment).
Thanks
For IEEE floats, division of a finite nonzero float by 0 is well-defined and results in +infinity (if the value was greater than zero) or -infinity (if the value was less than zero). The result of 0.0/0.0 is NaN. If you use integers, the behaviour is undefined.
Note that C standard says (6.5.5):
The result of the / operator is the quotient from the division of the first operand by the second; the result of the % operator is the remainder. In both operations, if the value of the second operand is zero, the behavior is undefined.
So something/0 is undefined (by the standard) both for integral and floating-point types. Nevertheless, most implementations exhibit the aforementioned behavior (±INF or NaN).
If you're talking integers then your program should crash upon division by zero.
If you're talking floats then division by zero is allowed and the result is INF or -INF. Now it's all up to your code whether the program will crash, handle that nicely, or continue with undefined/unexpected results.
If you use IEEE floats, division by zero returns infinity or NaN. If op1 is 0, you will get NaN (0.0/0.0 is not a number). If op1 is greater than 0, you will get Infinity. If op1 is lower than 0, you will get -Infinity. If you divide by 0 in integer arithmetic, you will get the error "Floating point exception".
#include <iostream>
#include <string>   // for std::string
#include <cmath>
using namespace std;

int main()
{
    double a = 123, b = 0;
    double result = a / b;
    string isInfinite = isinf(result) ? "is" : "is not";
    cout << "result=" << result << " " << isInfinite << " infinity" << endl;
}
Output:
result=inf is infinity