I'm trying to take the 11th root of an expression and I'm getting a return of -inf.
std::cout << pow(j,(1.0/11.0)) << std::endl;
where j is just some log expression. I've checked that number to make sure it's valid, and it is. I'm thinking it's the way the power expression is being evaluated. Is there a better way to do this? Thanks.
And yes, I've included cmath in my code.
I can't think of a valid reason for pow to return -inf if your inputs are marginally sane. However, in case you're passing in a negative number, something that may be worth trying is:
if(j==0) return 0;
if(j<0) return -pow(-j, 1.0/11.0);
return pow(j,1.0/11.0);
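If it still comes back as -inf, it is worth asking pow itself what went wrong. A minimal diagnostic sketch (the value of j here is made up, not taken from your code; note that errno is only guaranteed to be set when math_errhandling includes MATH_ERRNO):

#include <cerrno>
#include <cmath>
#include <cstring>
#include <iostream>

int main()
{
    double j = -8.0;                      // stand-in for your log expression
    errno = 0;
    double r = std::pow(j, 1.0 / 11.0);   // negative base + non-integer exponent -> domain error (NaN)
    std::cout << "result = " << r << '\n';
    if (std::isnan(r) || std::isinf(r))
        std::cout << "pow complained: " << std::strerror(errno) << '\n';
}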
Try looking for FPU errors.
The most common is a forgotten return of a float/double in some function,
which leads to problems on the FPU stack, which is really small.
You can also try adding this before the pow call:
asm { fninit; };
This resets the FPU, so if you have problems on the stack it will help,
but of course do not do this in the middle of some FPU computation,
as that would destroy its result.
If you are not on an x87 platform then this will not help.
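For what it's worth, the asm { fninit; } syntax above is compiler-specific (old Borland style). A minimal sketch of the same idea for common compilers, only meaningful as a debugging aid on x86/x87 targets:

void reset_fpu()
{
#if defined(_MSC_VER) && defined(_M_IX86)
    __asm { fninit }                         // MSVC 32-bit inline assembly
#elif defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
    __asm__ __volatile__("fninit");          // GCC/Clang basic asm
#endif
}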
The value of j just before the crash would be a good thing to share with us.
Try storing the result of pow in a float/double variable,
and cout that variable, not a temporary.
If it prints -inf, also inspect that variable in the debugger to see whether it really is -inf
(it could be something wrong with the cout, not pow ...).
Minimize your code (turn things off part by part)
and see if the problem suddenly goes away;
hidden memory leaks and code overwrites are evil ...
Let us know what you have found.
I have a program that behaves weirdly and probably has undefined behaviour. Sometimes, the return address of a function seems to be changed, and I don't know what's causing it.
The return address is always changed to the same address, an assertion inside a function the control shouldn't be able to reach. I've been able to stop the program with a debugger to see that when it's supposed to execute a return statement, it jumps straight to the line with the assertion instead.
This code approximates how my function works.
int foo(Vector t)
{
    double sum = 0;
    for (unsigned int i = 0; i < t.size(); ++i) {
        sum += t[i];
    }
    double limit = bar(); // bar returns a value between 0 and 1
    double a = 0;
    for (double i = 0; i < 10; i++) {
        a += f(i)/sum; // f(1)/sum + ... + f(10)/sum = 1.0f
        if (a > 3) return a;
    }
    // shouldn't get here
    assert(false); // ... then this line is executed
}
This is what I've tried so far:
Replacing all std::vector [] operators with .at() to prevent accidentally writing out of bounds
Made sure all return-by-value values are const.
Switched on -Wall and -Werror and -pedantic-errors in gcc
Ran the program with valgrind
I get a couple of "invalid read of size 8" errors, but they seem to originate from Qt, so I'm not sure what to make of them. Could this be the problem?
The error happens only occasionally when I have run the program for a while and give it certain input values, and more often in a release build than in a debug build.
EDIT:
So I managed to reproduce the problem in a console application (no Qt loaded). I then managed to simulate the events that caused the problem.
Like some of you suggested, it turns out I misjudged what was actually causing it to reach the assertion, probably due to my lack of experience with Qt's debugger. The actual problem was a floating-point error in the double i used as a loop counter.
I was implementing softmax, but exp(x) got rounded to zero with particular inputs.
Now that I have solved the problem, I might rephrase it: is there a method for checking for problems like rounding errors automatically, i.e. breaking on 0/0, for instance?
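For reference, the standard trick for avoiding that underflow in softmax is to subtract the maximum input before exponentiating. A minimal sketch of the idea (not my original code, and the container type is just an assumption):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> softmax(const std::vector<double>& x)
{
    // Shift so the largest exponent is 0; exp never overflows and
    // at least one term is exp(0) == 1, so the sum can't round to zero.
    const double m = *std::max_element(x.begin(), x.end());
    std::vector<double> out(x.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        out[i] = std::exp(x[i] - m);
        sum += out[i];
    }
    for (double& v : out)
        v /= sum;                        // sum >= 1, so no division by zero
    return out;
}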
The short answer is:
The most portable way of determining if a floating-point exceptional condition has occurred is to use the floating-point exception facilities provided by C in fenv.h.
although, unfortunately, this is far from being perfect.
I suggest you read both
https://www.securecoding.cert.org/confluence/display/seccode/FLP04-C.+Check+floating-point+inputs+for+exceptional+values
and
https://www.securecoding.cert.org/confluence/display/seccode/FLP03-C.+Detect+and+handle+floating-point+errors
which concisely address the exact question you are posing:
Is there a method for checking problems like rounding errors automatically.
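To make that concrete, here is a minimal sketch of the fenv.h approach (strictly conforming code also wants #pragma STDC FENV_ACCESS ON, which not every compiler honours):

#include <cfenv>
#include <iostream>

int main()
{
    std::feclearexcept(FE_ALL_EXCEPT);

    volatile double zero = 0.0;     // volatile so the compiler doesn't fold 0/0 away
    double bad = zero / zero;       // raises FE_INVALID

    if (std::fetestexcept(FE_INVALID))
        std::cout << "invalid operation (e.g. 0/0) occurred\n";
    if (std::fetestexcept(FE_DIVBYZERO))
        std::cout << "division by zero occurred\n";
    if (std::fetestexcept(FE_UNDERFLOW))
        std::cout << "underflow occurred (e.g. exp of a large negative number)\n";

    std::cout << bad << '\n';       // prints nan
}

On glibc, feenableexcept (a GNU extension) can additionally turn these conditions into a SIGFPE trap, which gives you the "break on 0/0" behaviour under a debugger.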
Is it better to declare and initialize the variable or just declare it?
What's the best and the most efficient way?
For example, I have this code:
#include <stdio.h>

int main()
{
    int number = 0;
    printf("Enter with a number: ");
    scanf("%d", &number);
    if (number < 0)
        number = -number;
    printf("The modulo is: %d\n", number);
    return 0;
}
If I don't initialize number, the code works fine, but I want to know, is it faster, better, more efficient? Is it good to initialize the variable?
scanf can fail, in which case nothing is written to number. So if you want your code to be correct you need to initialize it (or check the return value of scanf).
The speed of incorrect code is usually irrelevant, but for your example code, if there is a difference in speed at all, I doubt you would ever be able to measure it. Setting an int to 0 is much faster than I/O.
Don't attribute speed to a language; that attribute belongs to implementations of the language. There are fast implementations and slow implementations. There are optimisations associated with fast implementations; a compiler that produces well-optimised machine code would optimise the initialisation away if it can deduce that the initialisation isn't needed.
In this case, it actually does need the initialisation. Consider what happens if scanf fails. When scanf fails, its return value reflects this failure. It'll either return:
A value less than zero if there was a read error or EOF (which can be triggered in an implementation-defined way, typically CTRL+Z on Windows and CTRL+d on Linux),
A number less than the number of objects provided to scanf (since you've provided only one object, this failure return value would be 0) when a conversion failure occurs (for example, entering 'a' on stdin when you've told scanf to convert sequences of '0'..'9' into an integer),
The number of objects scanf managed to assign to. This is 1, in your case.
Since you aren't checking for any of these return values (particularly #3), your compiler can't deduce that the initialisation is unnecessary and hence can't optimise it away. When the variable is uninitialised, failure to check these return values results in undefined behaviour. A chicken might appear to be living, even when it is missing its head. It would be best to check the return value of scanf. That way, when your variable is uninitialised you can avoid using an uninitialised value, and when it isn't, your compiler can optimise away the initialisation, presuming you handle erroneous return values by producing error messages rather than using the variable.
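To make that concrete, a small sketch of the same program with the return value checked (the error-message wording is mine, not part of the original code):

#include <stdio.h>

int main(void)
{
    int number;
    printf("Enter a number: ");
    if (scanf("%d", &number) != 1) {      // conversion failure or EOF: number is still uninitialized
        fprintf(stderr, "No valid number entered.\n");
        return 1;                         // bail out without ever reading number
    }
    if (number < 0)
        number = -number;
    printf("The modulo is: %d\n", number);
    return 0;
}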
Edit: On the topic of undefined behaviour, consider what happens in this code:
if (number < 0)
    number = -number;
If number is -32768, and INT_MAX is 32767, then section 6.5, paragraph 5 of the C standard applies because -(-32768) isn't representable as an int.
Section 6.5, paragraph 5 says:
If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.
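To make that concrete, a hedged sketch of one way to guard against it (widening to a larger type; on typical platforms long long can hold -INT_MIN):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int number = INT_MIN;                  // worst case: -number would overflow

    if (number == INT_MIN) {
        // -INT_MIN is not representable as an int, so widen before negating
        long long magnitude = -(long long)number;
        printf("The modulo is: %lld\n", magnitude);
    } else {
        printf("The modulo is: %d\n", number < 0 ? -number : number);
    }
    return 0;
}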
Suppose you don't initialize a variable and your code is buggy (e.g. you forgot to read number). Then the uninitialized value of number is garbage, and different runs will output (or behave) differently.
But if you initialize all of your variables, the program will produce a consistent result: an easy-to-trace error.
Yes, initialization adds extra steps to your code at a low level, for example mov $0, 28(%esp). But it's a one-time task and doesn't kill your code's efficiency.
So always initializing your variables is good practice!
With modern compilers, there isn't going to be any difference in efficiency. Coding style is the main consideration. In general, your code is more self-explanatory and less likely to have mistakes if you initialize all variables upon declaring them. In the case you gave, though, since the variable is effectively initialized by the scanf, I'd consider it better not to have a redundant initialization.
First, you need to answer these questions:
1) How many times is this function called? If you call it 10,000,000 times, then it's worth making it as good as possible.
2) If I don't initialize my variable, am I sure my code is safe and won't misbehave?
That said, an int initialization doesn't change much in your code, but a string initialization does.
Be sure that you do all the checks, because if you have an uninitialized variable your program is potentially buggy.
I can't tell you how many times I've seen simple errors because a programmer doesn't initialize a variable. Just two days ago there was another question on SO where the end result of the issue being faced was simply that the OP didn't initialize a variable and thus there were problems.
When you talk about "speed" and "efficiency", don't simply consider how much faster the code might compile or run (and in this case it's pretty much irrelevant anyway); consider your debugging time when there's a simple mistake in the code due to the fact that you didn't initialize a variable that very easily could have been initialized.
Note also, my experience is that when coding for larger corporations, they will run your code through tools like Coverity or Klocwork, which will ding you for uninitialized variables because they present a security risk.
I have a simple piece of code that extracts a float from a FORTRAN-generated REAL array, and then inserts it into a stream for logging. Although this works for the first 30 cases, on the 31st it crashes with a "Floating-point invalid operation".
The code is:
int FunctionDeclaration(float* mrSwap)
{
    ...
    float swap_float;
    stringstream message_stream;
    ...
    swap_float = *(mrSwap + 30 - 1);
    ...
    message_stream.clear();
    message_stream << 30 << "\t" << swap_float << "\tblah blah blah \t";
When debugging, the value of swap_float the instant before the crash (on the last line, above) is 1711696.3 - other than this being much larger than most of the values up until this point, there is nothing particularly special about it.
I have also tried replacing message_stream with cerr, and got the same problem. I had hitherto believed cerr to be pretty much indestructible - how can a simple float destroy it?
Edit:
Thanks for the comments: I've added the declaration of mrSwap. mrSwap is approximately 200 elements long, so I'm a long way off the end. It is populated outside of my control, and individual entries may not be populated - but to the best of my understanding, this would just mean that swap_float would be set to a random float?
individual entries may not be populated - but to the best of my understanding, this would just mean that swap_float would be set to a random float?
Emphatically not. Certain bit patterns in an IEEE floating-point number indicate an invalid number -- for instance, the result of an overflowing arithmetic operation, or an invalid one (such as 0.0/0.0). The puzzling thing here is that the debugger apparently accepts the number as valid, while cout doesn't.
Try getting the bit layout of swap_float. On a 32-bit system:
int i = *(int*)&swap_float;
Then print i in hexadecimal, and let us know what you see.
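If you would rather avoid the type-punning cast (formally undefined behaviour in C++), memcpy does the same job; a small sketch, assuming float and the integer type are both 32 bits:

#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    float swap_float = 1711696.25f;                 // stand-in for the value read from mrSwap
    std::uint32_t bits;
    std::memcpy(&bits, &swap_float, sizeof bits);   // well-defined way to inspect the representation
    std::cout << std::hex << std::uppercase << bits << '\n';   // prints 49D0F282
}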
Updated to add: From Mike's comment, i=1238430338, which is 49D0F282 in hex. This is a valid floating-point number, equal to exactly 1711696.25. So I don't know what's going on, I'm afraid. The only thing I can suggest is that maybe the compiler is loading the invalid floating-point number directly from the mrSwap array into the floating-point register bank, without going through swap_float. So the true value of swap_float is simply not available to the debugger. To check this, try
int j = *(int*)(mrSwap+30-1);
and tell us what you see.
Updated again to add: Another possibility is a delayed floating-point trap. The floating-point co-processor (built into the CPU these days) generates a floating-point interrupt because of some illegal operation, but the interrupt doesn't get noticed until the next floating-point operation is attempted. So this crash might be a result of the previous floating-point operation, which could be anywhere. Good luck with that...
I'm just adding this answer to highlight the correct solution within TonyK's answer above - because we did a few loops, the answer has been edited, and because several salient points are within the comments, the actual answer may not be immediately apparent. All credit should go to TonyK for the solution.
"Another possibility is a delayed floating-point trap. The floating-point co-processor (built into the CPU these days) generates a floating-point interrupt because of some illegal operation, but the interrupt doesn't get noticed until the next floating-point operation is attempted. So this crash might be a result of the previous floating-point operation, which could be anywhere." - TonyK
This was indeed the problem: in my comparison using IsSame, the other value was NaN (this is a valid value in this context), and although it happily subtracted it from swap_float, it set a flag saying to report the next operation as an error. I have to say that I was completely unaware that that was possible - I thought that if it worked, it worked.
We have some code that looks like this:
inline int calc_something(double x) {
    if (x > 0.0) {
        // do something
        return 1;
    } else {
        // do something else
        return 0;
    }
}
Unfortunately, when using the flag /fp:fast, we get calc_something(0)==1 so we are clearly taking the wrong code path. This only happens when we use the method at multiple points in our code with different parameters, so I think there is some fishy optimization going on here from the compiler (Microsoft Visual Studio 2008, SP1).
Also, the above problem goes away when we change the interface to
inline int calc_something(const double& x) {
But I have no idea why this fixes the strange behaviour. Can anyone explain this behaviour? If I cannot understand what's going on, we will have to remove the /fp:fast switch, but this would make our application quite a bit slower.
I'm not familiar enough with FPUs to comment with any certainty, but my guess would be that the compiler is letting an existing value that it thinks should be equal to x sit in on that comparison. Maybe you go y = x + 20.0; y = y - 20.0; y is already on the FP stack, so rather than load x the compiler just compares against y. But due to rounding errors, y isn't quite 0.0 like it is supposed to be, and you get the odd results you see.
For a better explanation: Why is cos(x) != cos(y) even though x == y? from the C++FAQ lite. This is part of what I'm trying to get across, I just couldn't remember where exactly I had read it until just now.
Changing to a const reference fixes this because the compiler is worried about aliasing. It forces a load from x because it can't assume its value hasn't changed at some point after creating y, and since x is actually exactly 0.0 [which is representable in every floating point format I'm familiar with] the rounding errors vanish.
I'm pretty sure MS provides a pragma that allows you to set the FP flags on a per-function basis. Or you could move this routine to a separate file and give that file custom flags. Either way, it could prevent your whole program from suffering just to keep that one routine happy.
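If memory serves, the pragma is float_control; a hedged sketch of how that might look (check the exact syntax against the MSVC documentation for your compiler version):

// Ask for precise floating-point semantics for just this function,
// even when the rest of the translation unit is compiled with /fp:fast.
#pragma float_control(precise, on, push)
inline int calc_something(double x) {
    if (x > 0.0) {
        // do something
        return 1;
    } else {
        // do something else
        return 0;
    }
}
#pragma float_control(pop)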
What are the results of calc_something(0L), or calc_something(0.0f)? It could be linked to the size of the types before conversion. An int is 4 bytes, a double is 8.
Have you tried looking at the assembled code to see how the aforementioned conversion is done?
Googling for 'fp fast', I found this post [social.msdn.microsoft.com]
As I've said in another question, compilers suck at generating floating-point code. The article Dennis links to explains the problems well. Here's another: an MSDN article.
If the performance of the code is important, you can easily¹ out-perform the compiler by writing your own assembler code. If your algorithm is vectorisable then you can make use of SIMD too (with a slight loss of precision though).
¹ Assuming you understand the way the FPU works.
inline int calc_something(double x) will (probably) use an 80-bit register. inline int calc_something(const double& x) would store the double in memory, where it takes 64 bits. That at least explains the difference between the two.
However, I find your test quite fishy to begin with. The results of calc_something are extremely sensitive to rounding of its input. Your FP algorithms should be robust to rounding. calc_something(1.0-(1.0/3.0)*3) should be the same as calc_something(0.0).
I think the behavior is correct.
You should never compare a floating-point number to within less than the holding type's precision.
Something that should come out to zero may end up equal to, greater than, or less than another "zero".
See http://floating-point-gui.de/
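A small sketch of the kind of tolerance-based comparison that site recommends (the tolerances are arbitrary placeholders; pick ones that match the scale of your data):

#include <algorithm>
#include <cmath>

// True when a and b agree to within a relative tolerance,
// falling back to an absolute tolerance for values near zero.
bool nearly_equal(double a, double b,
                  double rel_eps = 1e-9, double abs_eps = 1e-12)
{
    const double diff = std::fabs(a - b);
    if (diff <= abs_eps)
        return true;
    return diff <= rel_eps * std::max(std::fabs(a), std::fabs(b));
}

With something like this, the branch in calc_something could treat anything within tolerance of 0.0 explicitly instead of relying on an exact x > 0.0 test.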
So I have a function that looks something like this:
float function() {
    float x = SomeValue;
    return x / SomeOtherValue;
}
At some point, this function overflows and returns a really large negative value. To try and track down exactly where this was happening, I added a cout statement so that the function looked like this:
float function() {
    float x = SomeValue;
    cout << x;
    return x / SomeOtherValue;
}
and it worked! Of course, I solved the problem altogether by using a double. But I'm curious as to why the function worked properly when I couted it. Is this typical, or could there be a bug somewhere else that I'm missing?
(If it's any help, the value stored in the float is just an integer value, and not a particularly big one. I just put it in a float to avoid casting.)
Welcome to the wonderful world of floating point. The answer you get will likely depend on the floating point model you compiled the code with.
This happens because of the difference between the IEEE spec and the hardware the code is running on. Your CPU likely has 80-bit floating-point registers that get used to hold the 32-bit float value. This means that there is far more precision while the value stays in a register than when it is forced to a memory address (also known as 'homing' the register).
When you passed the value to cout, the compiler had to write the floating-point value to memory, and this results in a loss of precision and interesting behaviour WRT overflow cases.
See the MSDN documentation on VC++ floating point switches. You could try compiling with /fp:strict and seeing what happens.
Printing a value to cout should not change the value of the parameter in any way at all.
However, I have seen similar behaviour where adding debugging statements causes a change in the value. In those cases, and probably this one as well, my guess was that the additional statements were causing the compiler's optimizer to behave differently, and so generate different code for your function.
Adding the cout statement means that the value of x is used directly. Without it the optimizer could remove the variable, changing the order of the calculation and therefore changing the answer.
As an aside, it's always a good idea to declare immutable variables using const:
float function() {
    const float x = SomeValue;
    cout << x;
    return x / SomeOtherValue;
}
Among other things this will prevent you from unintentionally passing your variables to functions that may modify them via non-const references.
cout takes a reference to the variable, which will often force the compiler to spill it to the stack.
Because it is a float, this likely causes its value to be truncated from the double or long double representation it would normally have.
Calling any function (non-inlined) that takes a pointer or reference to x should end up causing the same behavior, but if the compiler later gets smarter and learns to inline it, you'll be equally screwed :)
I don't think the cout has any effect on the variable; the problem would have to be somewhere else.