Assertion Failed in C++ questions - c++

Running into an issue with some code I'm working on. This code is being run on a linux-based system and the error I receive is the following:
/root/cvswork/pci_sync_card/Code/SSBSupport/src/CRCWbHfChannel/CRCWbHfMSBSimulator.cpp:447:
virtual void CCRCWbHfMSBSimulator::Process(): Assertion 'pcBasebandOutput' failed.
I've tried stepping through this code to figure out why this is failing and I can't seem to figure it out. Unfortunately I have too many files to really share the code on here (stepping through the pcBasebandOutput assignment takes quite some time). I understand this is a more complex issue than can really be asked about. My primary questions are these:
Is my assert(pcBasebandOutput); line of code necessary? I only ask because when running this code on Visual Studio, the results from my program were desirable.
When it is evaluating my pcBasebandOutput variable, why would it evaluate it as false? Is this saying that no value is actually assigned to pcBasebandOutput? Or that a value may be assigned to it, but it is not of the right type (pointer to a struct of two variables, both of which are doubles)?
Thanks!

assert checks a logical condition. The assertion fails if the condition is false. So writing assert(cond) is logically the same as writing:
if (!cond)
{
    assert(false);
}
I don't suggest removing the assert from the code, because it is a guard telling you that something did not go the way it was intended to go. It's not a good idea to just ignore that, because it may shoot you in the foot later.

Only you can know that
What is the type of pcBasebandOutput? Maybe it is not properly initialized?
assert's primary purpose is to let your IDE break into a debugging session at the place where the assert was hit. From there you can read all the variables and see the call stack and threads. The other solution (besides using a debugger) is to add lots of logging, which in threaded environments can cause problems of its own (logging is quite slow).
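A minimal sketch of why an assert on a pointer can fail: if the pointer is null (for example, never assigned, or an allocation/lookup returned a null pointer), it converts to false and the assert fires. The BasebandSample struct and GetOutputBuffer function here are made up, standing in for whatever pcBasebandOutput really points at:
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for the real output type: a struct of two doubles,
// as described in the question.
struct BasebandSample
{
    double i;
    double q;
};

// Imagine a lookup or allocation that can fail and return a null pointer.
BasebandSample* GetOutputBuffer(bool available)
{
    if (!available)
        return NULL; // the failure case: nothing was assigned
    return new BasebandSample();
}

int main()
{
    BasebandSample* pcBasebandOutput = GetOutputBuffer(false);

    // A pointer converts to false exactly when it is null, so this assert
    // fires: it means "no valid object is assigned here", not "wrong type".
    assert(pcBasebandOutput);
}
So the assert is not saying the value has the wrong type; it is saying the pointer itself is null at that point.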

Related

When should I use if(!someVar) vs assert()?

I always see some sort of variation of this statement:
if (!someVar) // or whatever expression
{
    someVar = new type; // or however the programmer wants to handle it
}
within code. My question is when should someone favor this method of error checking over an assert()? What are some specific examples? In my mind, assert() is probably the safer choice most of the time as you should often be asking yourself why a null or wrong value was passed to the variable in the first place. In this light then, should you ever use the if(!expr) statement?
For background I'm working specifically in C++ and with the assert.h header.
In the code sample you provided, the programmer is checking someVar, and changing it if someVar evaluates to boolean false. Within the concept of error checking, you can consider this a recoverable error (the error can be resolved by changing the value of someVar).
With an assert, you are making the statement that someVar MUST be true, or something is wrong that you cannot recover from. Typically the check is only run in debug builds, and the program will exit if the condition is false.
Well, assert will crash the program if an error occurs. However, you don't always want that. You might, for example, want to open a dialog box that tells the user about an error and gives them a chance to save their work.
A common choice for error handling is to throw an exception. Exceptions are great, because they can be caught, but if they aren't caught, they still crash the program just like an assert.
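A minimal sketch contrasting the three styles discussed above; Widget and the ReadValue functions are hypothetical names used only for illustration:
#include <cassert>
#include <stdexcept>

struct Widget
{
    int value;
    Widget() : value(0) {}
};

// 1) Recoverable check: fall back to a usable default and carry on.
int ReadValueRecoverable(const Widget* someVar)
{
    Widget fallback;
    if (!someVar)
        someVar = &fallback;
    return someVar->value;
}

// 2) Assert: the caller MUST pass a valid pointer. Debug builds abort here;
//    builds compiled with NDEBUG remove the check entirely.
int ReadValueAssert(const Widget* someVar)
{
    assert(someVar);
    return someVar->value;
}

// 3) Exception: report the problem so a caller can catch it, for example to
//    show a dialog and let the user save their work before exiting.
int ReadValueThrow(const Widget* someVar)
{
    if (!someVar)
        throw std::invalid_argument("someVar must not be null");
    return someVar->value;
}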

How do I make gtest not completely shutdown when it hits an assert? (not a test assert)

In the developer code, there are many places where it calls assert(xyz):
(from assert.h)
#define assert(_Expression) (void)( (!!(_Expression)) || (_wassert(_CRT_WIDE(#_Expression), _CRT_WIDE(__FILE__), __LINE__), 0) )
When I run my tests through gtest and one of these asserts fails, then my executable completely shuts down.
I want a way for gtest to just catch this assert, fail the test, and the continue execution. Is this possible?
From Google Test's reference documentation:
How to Write a Death Test
Google Test has the following macros to support death tests: ASSERT_DEATH(statement, regex) and EXPECT_DEATH(statement, regex), plus ASSERT_EXIT(statement, predicate, regex) and EXPECT_EXIT(statement, predicate, regex),
where statement is a statement that is expected to cause the process to die, predicate is a function or function object that evaluates an integer exit status, and regex is a regular expression that the stderr output of statement is expected to match. Note that statement can be any valid statement (including compound statement) and doesn't have to be an expression.
You can use these test macros to intercept native exit() or _exit() calls in your tested code, provided they return a value different from 0.
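A sketch of what such a death test could look like, assuming a toolchain where a failed assert writes its message to stderr and aborts the process (Reciprocal is a made-up function used only for illustration):
#include <cassert>
#include <gtest/gtest.h>

// Hypothetical function under test: asserts on invalid input.
int Reciprocal(int x)
{
    assert(x != 0);
    return 1 / x;
}

// The statement runs in a child process, which is expected to die; the
// regular expression is matched against the child's stderr output.
TEST(ReciprocalDeathTest, AssertsOnZero)
{
    EXPECT_DEATH(Reciprocal(0), "Assertion");
}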
As for your comment
"What if the test itself doesn't expect it, but it happens anyway? I don't want the rest of my execution to stop. Just that test to fail, then continue on."
Sorry, you can't prevent that. That's what assert() statements are designed for: they act as a self-check for certain functions, verifying the inputs they receive or the conditions they achieve.
You may try to compile your test code and the code under test with the -DNDEBUG compiler option, but this will leave you with even more obscure issues, such as running into undefined behaviour.
If a test case is likely to hit an unexpected assertion, there's either something wrong with your test case's input values, or with the code tested.
So you should set up reproducible conditions, such that either the test case fails at the assert (and the unit test runner carries on), or the whole thing blows up (exits the test runner process), which means your tested input didn't pass (and you'll need to change the test case, or fix the code under test).
Basically, if the code you are testing is broken, the test cannot continue.
To keep gtest from crashing, make sure the code you are testing at least compiles properly, and that the input it is gathering is valid.
I am saying this not to be mean, but rather out of personal experience. I use gtest and gmock for my own projects. I have been playing around with code lately that was a bit out of my league (after all, the only way to grow is to stretch beyond your perceived limits).
The code was taking data from a data file, and this was crashing my tests, not because there was anything wrong with the tests, but because I wasn't yet doing proper error checking in the functions that were reading from the file, and it threw a wrench into things when the program got a string where it wanted an integer.
Believe it or not, exceptions are a GOOD thing in tests. You don't want to just ignore them and move on, you want to figure out what is causing them and make it stop. That is the entire reason for testing.

Why is passing a char* to this method failing?

I have a C++ method such as:
bool MyClass::Foo(char* charPointer)
{
    return CallExternalAPIFunction(charPointer);
}
Now I have some static method somewhere else such as:
bool MyOtherClass::DoFoo(char* charPointer)
{
    return _myClassObject.Foo(charPointer);
}
My issue is that my code breaks at that point. It doesn't exit the application or anything, it just never returns any value. To try and pinpoint the issue, I stepped through the code using the Visual Studio 2010 debugger and noticed something weird.
When I step into the DoFoo function and hover over charPointer, I actually see the value it was called with (an IP address string in this case). However, when I step into Foo and hover over charPointer, nothing shows up and the external API function call never returns (it's as if it's just stepped over), and my program resumes its execution after the call to DoFoo.
I also tried using the Exceptions... feature of the VS debugger (to pick up first-chance exceptions) but it never picked up anything.
Has this ever happened to anyone? Am I doing something wrong?
Thank you.
You need to build the project with Debug settings. Release settings mean that optimizations are enabled and optimizations make debugging a beating.
Without optimizations, there is a very close correspondence between statements in your C++ code and blocks of machine code in the program. The program is slower (often far slower) but it's easier to debug because you can observe what each statement does.
The optimizer reorders your code, eliminates variables, inlines functions, unrolls loops, and does all sorts of other things to make the program fast. The program is faster (often much faster) but it's far more difficult to debug because the correspondence between the statements in your C++ code and the instructions in the machine code is no longer there.

Visual Studio 2005 C compiler problem when optimizing a switch statement

General Question which may be of interest to others:
I ran into what I believe is a C++ compiler optimization problem (Visual Studio 2005) with a switch statement. What I'd like to know is whether there is any way to satisfy my curiosity and find out what the compiler is trying, but failing, to do. Is there any log I can spend some time (probably too much time) deciphering?
My specific problem for those curious enough to continue reading - I'd like to hear your thoughts on why I get problems in this specific case.
I've got a tiny program with about 500 lines of code containing a switch statement. Some of its cases contain some assignment of pointers.
double *ptx, *pty, *ptz;
double **ppt = new double*[3];
//some code initializing etc ptx, pty and ptz
ppt[0]=ptx;
ppt[1]=pty; //<----- this statement causes problems
ppt[2]=ptz;
The middle statement seems to hang the compiler. The compilation never ends. OK, I didn't wait for longer than it took to walk down the hall, talk to some people, get a cup of coffee and return to my desk, but this is a tiny program which usually compiles in less than a second. Remove a single line (the one indicated in the code above) and the problem goes away, as it also does when removing the optimization (on the whole program or using #pragma on the function).
Why does this middle line cause a problem? The compiler's optimizer doesn't like pty.
There is no difference in the vectors ptx, pty, and ptz in the program. Everything I do to pty I do to ptx and ptz. I tried swapping their positions in ppt, but pty was still the line causing a problem.
I'm asking about this because I'm curious about what is happening. The code is rewritten and is working fine.
Edit:
Almost two weeks later, I check out the closest version to the code I described above and I can't edit it back to make it crash. This is really annoying, embarrassing and irritating. I'll give it another try, but if I don't get it breaking anytime soon I guess this part of the question is obsolete and I'll remove it. Really sorry for taking your time.
If you need to make this code compilable without changing it too much, consider using memcpy where you assign a value to ppt[1]. This should at least compile fine.
However, your problem seems more likely to be that another part of the source code is causing this behaviour.
What you can also try is to put this stuff:
ppt[0]=ptx;
ppt[1]=pty; //<----- this statement causes problems
ppt[2]=ptz;
in another function.
This should also help the compiler a bit, steering it away from the path it is taking when compiling your code.
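A sketch of what those two suggestions combined might look like; FillPointerTable is a made-up helper, not code from the original program:
#include <cstring> // std::memcpy

// Move the assignments into their own small function, and copy the
// troublesome pointer value with memcpy instead of a plain assignment, in
// the hope of steering the optimizer onto a different code path.
void FillPointerTable(double** ppt, double* ptx, double* pty, double* ptz)
{
    ppt[0] = ptx;
    std::memcpy(&ppt[1], &pty, sizeof pty); // instead of ppt[1] = pty;
    ppt[2] = ptz;
}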
Did you try renaming pty to something else (e.g. pt_y)? A couple of times (e.g. with a variable named "rect2") I have run into the problem that some names seem to be "reserved".
It sounds like a compiler bug. Have you tried re-ordering the lines? e.g.,
ppt[1]=pty;
ppt[0]=ptx;
ppt[2]=ptz;
Also, what happens if you juggle the values that are assigned (which will introduce bugs in your code, but may indicate whether it's the pointer or the array that's the issue), e.g.:
ppt[0] = pty;
ppt[1] = ptz;
ppt[2] = ptx;
(or similar).
It's probably due to your declaration of ptx, pty and ptz being optimised so that they share the same address, and this then causes the compiler problems later in your code.
Try
static double *ptx;
static double *pty;
static double *ptz;

Testing for assert in the Boost Test framework

I use the Boost Test framework to unit test my C++ code and wondered if it is possible to test if a function will assert? Yes, sounds a bit strange but bear with me! Many of my functions check the input parameters upon entry, asserting if they are invalid, and it would be useful to test for this. For example:
void MyFunction(int param)
{
    assert(param > 0); // param cannot be less than 1
    ...
}
I would like to be able to do something like this:
BOOST_CHECK_ASSERT(MyFunction(0), true);
BOOST_CHECK_ASSERT(MyFunction(-1), true);
BOOST_CHECK_ASSERT(MyFunction(1), false);
...
You can check for exceptions being thrown using Boost Test so I wondered if there was some assert magic too...
Having the same problem, I dug through the documentation (and code) and found a "solution".
The Boost UTF uses boost::execution_monitor (in <boost/test/execution_monitor.hpp>). This is designed with the aim of catching everything that could happen during test execution. When an assert fires, execution_monitor intercepts it and throws boost::execution_exception. Thus, by using BOOST_REQUIRE_THROW you may assert the failure of an assert.
so:
#include <boost/test/unit_test.hpp>
#include <boost/test/execution_monitor.hpp> // for execution_exception

BOOST_AUTO_TEST_CASE(case_1)
{
    BOOST_REQUIRE_THROW(function_w_failing_assert(),
                        boost::execution_exception);
}
Should do the trick. (It works for me.)
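For reference, function_w_failing_assert stands for any function whose assert is guaranteed to fire; a trivial stand-in might look like this:
#include <cassert>

// Trivial example whose assert always fires, so (on a setup where this
// approach works) the execution monitor converts the failure into a
// boost::execution_exception.
void function_w_failing_assert()
{
    int value = -1;
    assert(value >= 0);
}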
However (or disclaimers):
It works for me. That is, on Windows XP, MSVC 7.1, boost 1.41.0. It might
be unsuitable or broken on your setup.
It might not be the intention of the author of Boost Test.
(although it seem to be the purpose of execution_monitor).
It will treat every form of fatal error the same way. I.e. it could be that something other than your assert is failing. In this case you could miss e.g. a memory corruption bug, and/or miss a failed assert.
It might break on future boost versions.
I expect it would fail if run in Release config, since the assert will be disabled and the code that the assert was meant to prevent will run, resulting in very undefined behavior.
If, in Release config for MSVC, some assert-like or other fatal error occurs anyway, it would not be caught (see the execution_monitor docs).
If you use assert or not is up to you. I like them.
See:
http://www.boost.org/doc/libs/1_41_0/libs/test/doc/html/execution-monitor/reference.html#boost.execution_exception
the execution-monitor user-guide.
Also, thanks to Gennadiy Rozental (Author of Boost Test), if you happen to
read this, Great Work!!
There are two kinds of errors I like to check for: invariants and run-time errors.
Invariants are things that should always be true, no matter what. For those, I use asserts. Things like you shouldn't be passing me a zero pointer for the output buffer you're giving me. That's a bug in the code, plain and simple. In a debug build, it will assert and give me a chance to correct it. In a retail build, it will cause an access violation and generate a minidump (Windows, at least in my code) or a coredump (Mac/unix). There's no catch that I can do that makes sense to deal with dereferencing a zero pointer. On Windows catch (...) can suppress access violations and give the user a false sense of confidence that things are OK when they've already gone horribly, horribly wrong.
This is one reason why I've come to believe that catch (...) is generally a code smell in C++ and the only reasonable place where I can think of that being present is in main (or WinMain) right before you generate a core dump and politely exit the app.
Run-time errors are things like "I can't write this file because of permissions" or "I can't write this file because the disk is full". For these sorts of errors throwing an exception makes sense because the user can do something about it like change the permission on a directory, delete some files or choose an alternate location to save the file. These run-time errors are correctable by the user. A violation of an invariant can't be corrected by the user, only by a programmer. (Sometimes the two are the same, but typically they aren't.)
Your unit tests should force code to throw the run-time error exceptions that your code could generate. You might also want to force exceptions from your collaborators to ensure that your system under test is exception safe.
However, I don't believe there is value in trying to force your code to assert against invariants with unit tests.
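As a small sketch of that split (SaveReport and the file path are made up): the invariant is guarded by an assert and left untested, while the run-time error is an exception the test deliberately forces:
#define BOOST_TEST_MODULE error_handling_examples
#include <boost/test/unit_test.hpp>

#include <cassert>
#include <fstream>
#include <stdexcept>
#include <string>

// Invariant: an empty path is a programming bug, so it is asserted.
// Run-time error: an unwritable path is reported with an exception that a
// caller (or a test) can handle.
void SaveReport(const std::string& path, const std::string& contents)
{
    assert(!path.empty());

    std::ofstream out(path.c_str());
    if (!out)
        throw std::runtime_error("cannot write " + path);
    out << contents;
}

BOOST_AUTO_TEST_CASE(save_report_reports_unwritable_path)
{
    // Force the run-time error path; the invariant assert is not unit tested.
    BOOST_CHECK_THROW(SaveReport("/no/such/dir/report.txt", "data"),
                      std::runtime_error);
}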
I don't think so. You could always write your own assert which throws an exception and then use BOOST_CHECK_THROW() to check for that exception (and BOOST_CHECK_NOTHROW() in the cases where the assert should not fire).
I think this question, and some of the replies, confuse run-time error detection with bug detection. They also confuse intent and mechanism.
A run-time error is something that can happen in a 100% correct program. It needs detection, it needs proper reporting and handling, and it should be tested. Bugs also happen, and for the programmer's convenience it's better to catch them early using precondition checks, invariant checks, or the occasional assert. But this is a programmer's tool. The error message will make no sense to an ordinary user, and it does not seem reasonable to test a function's behaviour on data that a properly written program will never pass to it.
As for intent and mechanism, it should be noted that an exception is nothing magic. Some time ago, Peter Dimov said on the Boost mailing list (approximately) that "exceptions are just a non-local jump mechanism". And this is very true. If you have an application where it's possible to continue after some internal error, without the risk that something will be corrupted before the repair, you can implement a custom assert that throws a C++ exception. But it would not change the intent, and it won't make testing for asserts much more reasonable.
At work I ran into the same problem. My solution is to use a compile flag. When my flag GROKUS_TESTABLE is on, GROKUS_ASSERT is turned into an exception, and with Boost you can test code paths that throw exceptions. When GROKUS_TESTABLE is off, GROKUS_ASSERT is translated to the C++ assert().
#if GROKUS_TESTABLE
#define GROKUS_ASSERT ... // exception
#define GROKUS_CHECK_THROW BOOST_CHECK_THROW
#else
#define GROKUS_ASSERT ... // assert
#define GROKUS_CHECK_THROW(statement, exception) {} // no-op
#endif
My original motivation was to aid debugging, i.e. assert() can be debugged quickly, while exceptions are often harder to debug in gdb. My compile flag seems to balance debuggability and testability pretty well.
Hope this helps
Sorry, but you're attacking your problem the wrong way.
"assert" is the spawn of the devil (a.k.a. "C") and is useless with any language that has proper exceptions. It's waaaaaay better to reimplement an assert-like functionality with exceptions. This way you actually get a chance of handling errors the right way (incl proper cleanup procedures) or triggering them at will (for unit testing).
Besides, if your code ever runs in Windows, when you fail an assertion you get a useless popup offering you to debug/abort/retry. Nice for automated unit tests.
So do yourself a favor and re-code an assert function that throws exceptions. There's one here:
How can I assert() without using abort()?
Wrap it in a macro so you get __FILE__ and __LINE__ (useful for debugging) and you're done.
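A minimal sketch of what such a macro might look like (this is only an illustration, not the code behind the linked answer):
#include <sstream>
#include <stdexcept>
#include <string>

// Exception type carrying the failed expression and its location.
struct assert_failure : std::logic_error
{
    explicit assert_failure(const std::string& what) : std::logic_error(what) {}
};

// Throwing replacement for assert(): reports the expression, file and line.
#define THROWING_ASSERT(expr)                                         \
    do {                                                              \
        if (!(expr)) {                                                \
            std::ostringstream msg;                                   \
            msg << "Assertion failed: " << #expr                      \
                << " at " << __FILE__ << ":" << __LINE__;             \
            throw assert_failure(msg.str());                          \
        }                                                             \
    } while (0)
A test can then check for the assert firing with, for example, BOOST_CHECK_THROW(MyFunction(0), assert_failure), which is essentially the BOOST_CHECK_ASSERT idea from the question.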