In short, I have this code:
vec3 contribX1 = Sample(O, D, 0);
if (std::isinf(contribX1.x)) {
    // ...do something...
}
According to my debugging, the Sample method sometimes returns an infinite value, and I need to fix that. But before doing so, I need the tools to debug properly. Looking around, I found std::isinf(), which should return a bool. Unfortunately, I never seem to enter that if condition, even though right afterwards I can inspect contribX1.x and it actually is 1.#INF0000. What am I doing wrong?
EDIT: The compiler is cl.exe; I am using Visual Studio 2013.
You can use isfinite to test whether the value is a valid, finite value (i.e. neither infinite nor NaN):
if (!std::isfinite(contribX1.x)){
should work for you. I think the issue here is that there are various values used to represent positive infinity, negative infinity, and NaN; in your situation, this test should cover all of them.
I don't know your platform but for Windows this related question is what helped me: std::isfinite on MSVC
I'm having trouble with some code from a C++ project. It includes the std::chrono library and keeps breaking at the following assertion:
static_assert(system_clock::duration::min() < system_clock::duration::zero(), "a clock's minimum duration cannot be less than its epoch");
The assert breaks the code both on a Debian machine with g++ 6.3.0 and on a Windows 10 PC with Cygwin and g++ 7.3.0.
I've also tried a simple example including the chrono library in an online C++ compiler; by itself it causes no problems, but manually comparing the minimum and zero durations of the chrono system clock gives the result that should trigger the assert as well.
I've searched about the issue and found some clues pointing to related problems caused by the TZ POSIX variable that holds timezone info. I tried unsetting it and setting it to its correct value, but it had no effect on the assert.
I'd appreciate any pointers or suggestions.
Edit: While std::chrono::milliseconds::zero() has, as expected, a value of 0, the value of std::chrono::milliseconds::min() is -9223372036854775808, or -2^63, which I think is the minimum possible value for a long long (possible overflow?).
After some tests I realized the assert was being triggered on both systems only when invoking g++ through the testing software being used; the same code compiled outside it did not fail the assertion with the same compilers.
It turns out that the software uses the EDG parser, which needs the option --64_bit_target to avoid triggering the assert. Unfortunately, the option is not covered in the parser documentation, so I can't know why this issue happens without it.
Probably the question doesn't have much value now, but I didn't want to delete it since people have already written answers that may be of interest to someone.
A duration can be negative, as you found with the highly negative value of …::min(). The assertion is incorrect, almost like asserting that -1 must be greater than zero.
The C++17 spec declares an abs() function for finding an absolute duration, and discusses its applicability with signed and unsigned representations:
23.17.5.9 duration algorithms [time.duration.alg]
template <class Rep, class Period> constexpr duration<Rep, Period> abs(duration<Rep, Period> d);
1 Remarks: This function shall not participate in overload resolution unless numeric_limits<Rep>::is_signed is true.
2 Returns: If d >= d.zero(), return d, otherwise return -d.
I have two suggestions:
In general, assertion failures only happen in Debug builds, so if you just want the build to succeed, you can build the Release version to avoid the problem.
Confirm that your Debian and Windows machines are set to the proper time zone.
I have been using C++ for quite some time by now and literally took things for granted.
Recently, I asked myself how the compiler can always return accurate values when I use large values in calculations.
I understand the 2^n concept (where n is the number of bits).
For example: if I add two ints with large values such as 10e6, I would expect the result to be wrong, as bits would be overwritten and ultimately represent a wrong integer. But this never seems to happen.
Can anyone shed some light on this?
Thanks.
I'm porting an application to Visual Studio 2012 (in C++11), which currently compiles and runs correctly on gcc on several platforms.
When I tried to run it, something odd (at least to me) happened. After some moments I isolated the problem, and came up to this:
// This one succeeds
assert(jumper.fst == (left_prog->buffer - program->buffer));
// This one fails!
assert(left_prog->buffer == (jumper.fst + program->buffer));
The program->buffer and left_prog->buffer are of type Instruction *, and jumper.fst has type ptrdiff_t.
It wouldn't return to the original value; after some testing, I realized this had something to do with alignment. My Program type is aligned (I've written my own allocator), and this was causing the problem. After I multiplied the alignment by sizeof(Instruction), it worked as expected.
Could someone please explain to me why this is happening? Is this behaviour documented somewhere? Am I using undefined behaviour here?
(As I said, I already fixed the code by changing the alignment value, making it a multiple of sizeof(Instruction); I just want to know why this happened.)
An efficient way to create random numbers in C/C++ is the rand() function, but I've seen code like this used to create random variables:
int x; x%=100;
Is this a good way to produce a simple random number? If your answer is no, please tell me why?
EDIT :
well the actual code is here:
int temp1,temp2;
A=(abs(temp1))%11-1;
B=(abs(temp2))%11-1; //Randomize without using rand()
A friend of mine wrote this code. When I tried to compile it, I got an "uninitialized local variable 'temp1' used" error (on MSVS). He wrote this code in 2011, and it worked on his Linux with the latest version of GCC.
It's rare to see something worse.
You have undefined behaviour as you are using an uninitialised variable.
And using modulus introduces statistical bias.
Your friend has misunderstood uninitialised variables.
Ignoring for a moment that reading from them is undefined and can thus do just about anything, if you pretend that they will safely yield an arbitrary value then you need to remember that arbitrary does not mean random.
This approach has somewhere between minimal and no random distribution of values. So you won't be able to predict what you get back, but that does not make it usefully "random" in any meaningful sense.
Also, applying % ruins any distribution so don't do that either.
Tell your friend to turn warnings on in his compilation settings. His GCC is trying to tell him all this, but he isn't listening.
I have a program that behaves weirdly and probably has undefined behaviour. Sometimes, the return address of a function seems to be changed, and I don't know what's causing it.
The return address is always changed to the same address, an assertion inside a function the control shouldn't be able to reach. I've been able to stop the program with a debugger to see that when it's supposed to execute a return statement, it jumps straight to the line with the assertion instead.
This code approximates how my function works.
int foo(Vector t) {
    double sum = 0;
    for (unsigned int i = 0; i < t.size(); ++i) {
        sum += t[i];
    }
    double limit = bar(); // bar returns a value between 0 and 1
    double a = 0;
    for (double i = 0; i < 10; i++) {
        a += f(i) / sum; // f(0)/sum + ... + f(9)/sum = 1.0
        if (a > limit) return a;
    }
    // shouldn't get here
    assert(false); // ... yet this line is executed
}
This is what I've tried so far:
Switching all std::vector operator[] accesses to .at() to prevent accidentally writing out of bounds
Made sure all return-by-value values are const.
Switched on -Wall and -Werror and -pedantic-errors in gcc
Ran the program with valgrind
I get a couple of "invalid read of size 8" errors, but they seem to originate from Qt, so I'm not sure what to make of them. Could this be the problem?
The error happens only occasionally when I have run the program for a while and give it certain input values, and more often in a release build than in a debug build.
EDIT:
So I managed to reproduce the problem in a console application (no Qt loaded). I then managed to simulate the events that caused the problem.
Like some of you suggested, it turns out I misjudged what was actually causing the assertion to be reached, probably due to my lack of experience with Qt's debugger. The actual problem was a floating-point error in the double I used as a loop condition.
I was implementing softmax, but exp(x) got rounded to zero with particular inputs.
Now that I have solved the problem, let me rephrase it: is there a method for checking problems like rounding errors automatically, i.e. breaking on 0/0, for instance?
The short answer is:
The most portable way of determining if a floating-point exceptional condition has occurred is to use the floating-point exception facilities provided by C in fenv.h.
although, unfortunately, this is far from being perfect.
I suggest you read both
https://www.securecoding.cert.org/confluence/display/seccode/FLP04-C.+Check+floating-point+inputs+for+exceptional+values
and
https://www.securecoding.cert.org/confluence/display/seccode/FLP03-C.+Detect+and+handle+floating-point+errors
which concisely address the exact question you are posing:
Is there a method for checking problems like rounding errors automatically.