I've got a bit of a problem with debugging a C++ program using GDB.
When I use print object.member, it doesn't always print the value of the variable correctly. Instead, it prints the value of one of the arguments to the function I'm debugging. And the printed value doesn't change through the function, although I change the value of object.member throughout.
And the thing is, the program is rather large and consists of several modules, with partially specialised templates and such, so I can't post it all here.
Now I tried to create a minimal test case, but however simple I made it, I couldn't get it to fail; I mean, I couldn't reproduce the misbehaviour.
So all I can ask is, has anybody ever seen this behaviour in GDB, and have you found out what caused it and how to solve it?
There are questions here about similar behaviour, but those amount to the program not being compiled properly (optimisation levels too high, etc.). I compiled it with -Wall -Wextra -pedantic -g -O0, so that can't be it.
And the program runs fine; I can cout << object.member; and that outputs the expected value, so I don't know what to try now.
I've seen similar behaviour before. Unfortunately, gdb is really 'C' based, so although it will deal with C++, I've occasionally found it to be quite picky about displaying values.
When displaying more complex items (such as maps, strings or the dereferenced contents of smart pointers) you have to sometimes be quite explicit about dereferencing and casting variables.
Another possibility is the function itself - anything unusual about it? Is it templated for example?
Can you create a reference to this variable in your code and try displaying that? Or take the address of the variable and dereference the contents; only if it's publicly available, of course.
Naturally, the source code must match what you've compiled (so the source must not be newer than the executable), but gdb will normally warn you about such things.
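To illustrate the reference trick from above, here is a sketch with a hypothetical Widget class (the names are made up, not from the original program): binding a plain local reference to the member gives the debugger a simple, unambiguous name to print, side-stepping whatever confuses it about the member expression.

```cpp
#include <cassert>

// Hypothetical class standing in for the real code.
struct Widget {
    int member = 0;
};

int process(Widget& object, int arg) {
    // A plain reference gives gdb a simple name to inspect:
    //   (gdb) print member_view
    // instead of the member expression that was misbehaving.
    int& member_view = object.member;
    object.member = arg * 2;
    return member_view; // reflects the updated member
}
```

Inside gdb itself you can also be explicit, e.g. print &object.member followed by print *$1, or cast the object pointer to its concrete type before dereferencing.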
So I went through this video - https://youtu.be/e4ax90XmUBc
Now, my doubt is this: if C++ is a compiled language, that is, the compiler goes through the entire code and translates it, then if I do something like
void main() {
    int a;
    cout << "This is a number = " << a; // This will give an error (Why?)
    a = 10;
}
Now, the answer to this would be that I have not defined a value for a, which I learned in school. But if a compiler goes through the entire code and then translates it, then I think it shouldn't give any error.
But by giving an error like this, it looks to me as if C++ is an interpreted language.
Can anyone put some light on this and help me solve my dilemma here?
Technically, the C++ standard doesn't mandate that the compiler has to compile C++ into machine code. As an example, LLVM Clang first compiles it to IR (Intermediate Representation) and only then to machine code.
Similarly, a compiler could embed a copy of itself in a program that it compiles and then, when the program is executed, compile the program, immediately invoke it, and delete the executable afterwards, which in practice would be very similar to the program being interpreted. In practice, though, all widely used C++ compilers parse and translate programs ahead of time.
Regarding your example, the statement "This will give an error" is a bit ambiguous. I'm not sure if you're saying that you're getting a compile-time error or a runtime error. As such, I will discuss both possibilities.
If you're getting a compile-time error, then your compiler has noticed that your program has undefined behaviour. This is something that you always want to avoid (in some cases, such as when interfacing with certain hardware, your application operates outside the scope of the C++ Standard and UB occurs by definition, since the behaviour simply isn't defined by the Standard). Detecting it is a simple form of static analysis. The Standard doesn't mandate that your compiler informs you of this error, and it would usually go unreported until runtime, but your compiler informed you anyway because it noticed that you probably made a mistake. For example, on g++ such behaviour can be achieved by using the -Wall -Werror flags.
If the error is a runtime error, then you're most likely seeing a message like "Memory Access Violation" (on Windows) or "Signal 11" (on Linux). This is because your program accessed uninitialized memory, which is undefined behaviour.
In practice, you most likely wouldn't get any error at all at runtime. Unless the compiler has embedded dynamic checks in your program, it would just silently print a (seemingly) random value and continue. The value comes from uninitialized memory.
Side note: main returns int rather than void. Also, using namespace std; is considered harmful.
I've found some code like the following (there are many problems in it):
// setup consistent in each of the bad code examples
string someString;
char* nullValue = getenv("NONEXISTENT"); // some non-existent environment variable

// bad code example 1:
char x[1024];
sprintf(x, " some text%s ", nullValue); // crashes on solaris, what about linux?

// bad code example 2:
someString += nullValue; // What happens here?

// bad code example 3:
someString.append(nullValue); // What happens here?

// bad code example 4:
string nextString = string(nullValue); // What happens here?
cout << nextString;
We're using Solaris, Linux, gcc, and Sun Studio, and will quite possibly use clang++ in the future. Is the behaviour of this code consistent across platforms and compilers? I couldn't find a spec describing the expected behaviour in all of the above cases.
At present, we have problems running our code with gcc (and on Linux); is the above code a likely cause?
If the code above acts the same in all of these environments, that's valuable information (even if the behavior is a crash) for me because I will know that this isn't the reason for our linux problems.
In general these uses of NULL, where a valid C string is expected, cause undefined behavior, which means that anything can happen.
Some platforms try to have defined behavior for this. IIRC there are platforms that deal gracefully with passing a NULL pointer to printf family functions for a %s format substitution (printing something like "(null)"). Other than that, some platforms try to ensure a reproducible crash (e.g. a fatal signal) for such cases. But you can't rely on this in general.
If you have problems in that area of the code: yes, this is a likely cause or may obscure other causes, so: fix it, it's broken!
There is a problem with constructing strings from the pointer without checking the return value first. The description of getenv says:
Retrieves a C string containing the value of the environment variable whose name is specified as argument. If the requested variable is not part of the environment list, the function returns a null pointer.
Creating a std::string from a null pointer is explicitly not allowed by the C++ standard. The same goes for appending it to a string (+=).
I'm no C expert, but I have a hunch that passing a null pointer to sprintf is not allowed either.
Exactly what happens when you use a NULL pointer in any of the cases you have described is "undefined behaviour". Some C libraries do recognise NULL for strings in printf, and will print "(null)" or something along those lines, but I would definitely not rely on that. Similarly, your other usages of NULL are "undefined", which means they are guaranteed to not work in any particular way across a range of platforms. What happens on one platform may well be completely different to what happens on another platform (or with another brand/version of the compiler, or with different compiler optimisation settings, or which way the wind blows that day if you are unlucky). In this case, it's likely that it leads to either a crash or "well behaved code" - it depends on who wrote the C/C++ library.
One solution, if you have a few of these things, is to create a "getenv_safe" that, instead of returning NULL, returns an empty string [or "not set" or similar] if the environment variable isn't set; then either fix the code directly, or #define getenv(x) getenv_safe(x).
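A minimal sketch of that wrapper (getenv_safe is this answer's hypothetical name, not a standard function):

```cpp
#include <cstdlib>
#include <string>

// Returns the variable's value, or a fallback (empty by default)
// when it is not set; never a null pointer.
std::string getenv_safe(const char* name, const char* fallback = "") {
    const char* value = std::getenv(name);
    return value ? value : fallback;
}
```

Callers can then append to or construct std::strings from the result without any null checks. The #define route works too, but renaming the call sites is usually cleaner than shadowing a standard function with a macro.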
I am having a weird optimisation-only bug so I am trying to determine which flag is causing it. The error (incorrect computation) occurs with -O1, but not with -O0. Therefore, I thought I could use all of the -f flags that -O1 includes to narrow down the culprit. However, when I try that (using this list http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html), it works fine again!
Can anyone explain this, or give other suggestions of what to look for? I've run the code through valgrind, and it does not report any errors.
EDIT
I found that the computation is correct with -O0, incorrect with -O1, but correct again with -O1 -ffloat-store. Any thoughts of what to look for that would cause it not to work without -ffloat-store?
EDIT2
If I compile with my normal release flags, there is a computation error. However, if I add either:
-ffloat-store
or
-mpc64
to the list of flags, the error goes away.
Can anyone suggest a way to track down the line at which this flag is making a difference so I could potentially change it instead of requiring everyone using the code to compile with an additional flag?
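One way to narrow it down without a global flag, sketched under the assumption that the culprit is x87 excess precision (which is exactly what -ffloat-store and -mpc64 affect): force individual intermediates through a memory store with volatile, which rounds them to 64-bit double much like -ffloat-store does everywhere, and bisect through the suspect computation until the result changes. The accumulate function below is a made-up stand-in for your real computation.

```cpp
#include <cassert>

// Hypothetical computation suspected of depending on excess precision.
double accumulate(const double* xs, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        sum += xs[i];
        // Forcing the intermediate through a memory store rounds it to a
        // 64-bit double, mimicking -ffloat-store for this one line only:
        volatile double rounded = sum;
        sum = rounded;
    }
    return sum;
}
```

Applying this line by line tells you which intermediate actually depends on the extra precision, so you can fix that spot instead of requiring the flag. Alternatives are compiling just one translation unit with -ffloat-store, or targeting SSE math on x86 (-mfpmath=sse), which avoids the 80-bit x87 registers entirely.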
From back in my GCC/C++ days, the optimisation bug like this that I remember was that, with -O0, methods where no return value was specified would return the last value of that type computed in the method (probably what you wanted to return, right?), whereas with optimisations on they returned the default value for the type, not the last value of that type in the method (this might only be true for value types; I can't remember). This meant you could develop for ages with the debug flags on and everything would look fine, then it would stop working when you optimised.
For me not specifying a return value is a compilation error, but that was C++ back then.
The solution to this was to switch on the strongest set of warnings and then to treat all warnings as errors: that will highlight things like this. (If you are not already doing this then you are in for a whole load of pain!)
If you already have all of the errors / warnings on then the only other option is that a method call with side-effects is being optimised out. That is going to be harder to track down.
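A sketch of the kind of bug described above: a function with a path that falls off the end without returning. With g++, -Wall (specifically -Wreturn-type) warns about this, and adding -Werror turns it into the hard error the answer recommends.

```cpp
// Compile with: g++ -Wall -Werror ...
int sign(int x) {
    if (x > 0) return 1;
    if (x < 0) return -1;
    // Without the line below, x == 0 falls off the end of the function:
    // undefined behaviour, and the value "returned" can change with the
    // optimisation level, exactly as described above.
    return 0;
}
```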
If I write a program like the following one, g++ and Visual Studio have the courtesy of warning me that the local variable a is never used:
int main()
{
    int a; // An unused variable? Warning! Warning!
}
If I remove the unused variable (to make the compiler happy), I am left with the following program:
int main()
{
    // An empty main? That's fine.
}
Now, I am left with a useless program.
Maybe I am missing something, but, if an unused variable is bad enough to raise a warning, why would an empty program be ok?
The example above is pretty simple, but in real life, if I have a big program with an empty main (because I forgot to put anything in it), then having a warning would be a good thing, wouldn't it?
Maybe I am missing an option in g++ or Visual Studio that can raise a warning/error when main is empty?
The reason for this is simple: if there is no return statement in main, it implicitly returns 0 (i.e. EXIT_SUCCESS), as defined by the standard.
So an empty main is fine: no return needed, no function calls needed, nothing.
To answer the question of why GCC doesn't warn you: warnings are there to help you with common mistakes. Leaving a variable unused can lead to confusing errors and code bloat.
However, leaving main entirely empty isn't a common mistake for anyone but a beginner, and it isn't worth warning about (because it's entirely legal as well).
I suspect a lot of it is that compilers generally try to warn about things that are potential problems, but aren't necessarily apparent.
Now it's certainly true that if all your main contains is a definition of a variable that's never used, that's fairly apparent; but if you've defined 16 variables (or whatever) and one of them is no longer used, that may not be so obvious.
In the case of main containing nothing, I suppose the same could happen: for example, you could have a whole web of #ifdef/#elif/etc. that led to main being entirely empty for some particular platform. I'm pretty sure I've never run across this, though, and I've never heard of anybody else seeing it either. At least to me, that suggests it probably doesn't arise often enough in practice for most people to care much about the possibility.
if an unused variable is bad enough to raise a warning, why would an empty program be ok?
First of all, an empty main does not equal an empty program. There could be static objects with non-trivial constructors/destructors. These would get invoked irrespective of whether main is empty.
Secondly, one could think of lots and lots of potential errors that a compiler could warn about, but most compilers don't. I think this particular one doesn't come up very often (and takes seconds to figure out). I therefore don't see a compelling case for specifically diagnosing it.
When I was cleaning up inherited C code that comprised the customized runner for Informix 4GL, I set the warning flags to catch everything and fixed every warning, and there were lots of warnings.
I haven't used Visual C++ in a long time, but can't VC++ be configured to flag the most severe warnings? It is probably not the default setting, but one you have to change.
If so, at least the unused variable would be flagged.
In a global sense, int main() is just the definition of the main function of the program, which returns success when it finishes.
The main function is the point by where all C++ programs start their execution, independently of its location within the source code.
So this:
int main()
{
    // An empty main? That's fine.
    // notice that the "return 0;" part is here by default, whether you wrote it or not
}
is just the definition of a function which returns an admissible value.
So everything is OK, and that's why the compiler is silent.
I am using g++ on Ubuntu 10.10 (64-bit), if the OS is at all relevant here.
I saw something strange, so I decided to check, and for some reason this code
#include <iostream>

int main()
{
    int a;
    std::cout << a << std::endl;
    return 0;
}
always prints 0. Apparently g++ auto-initializes uninitialized variables to their corresponding null value. The thing is, I want to turn that feature off, or at least make g++ show a warning about using uninitialized variables, since this way my code won't work well when compiled on VS, for instance. Besides, I'm pretty sure the C++ standard states that a variable which isn't explicitly initialized has an undefined value among all its possible values, which should in fact differ between executions of the program, since different parts of memory are used each time it's executed.
Explicit question: Is there a way to make g++ show warnings for uninitialized variables?
GCC does not initialize uninitialized variables to 0; it just happens that a is 0.
If what you want is to receive warnings when you use uninitialized variables, you can use the GCC option -Wuninitialized (also enabled by -Wall).
However, it can't statically spot every possible use of an uninitialized variable; you'll need a run-time tool to catch those, and valgrind exists for this.
You might also try to use a tool like cppcheck. In general, in well written C++ there is rarely a reason to declare a variable without initializing it.