C++ variable address does not match

class CCtrl
{
    // ...other members...
    RankCache m_stRankCache;
    uint32 m_uSyncListTime;
};

int CCtrl::UpdateList()
{
    uint32 tNow = GetNowTime();
    for (uint8 i = 0; i < uRankListNum; i++)
    {
        m_stRankCache.Append(i);
    }
    m_uSyncListTime = tNow;
    return 0;
}
Here are two weird things:
When I step into Append(), p this = 0x7f3f467edfdc, but in UpdateList(), p &m_stRankCache = 0x7f3f067edfdc; these two pointers are different.
tNow = 1418916316, but after executing m_uSyncListTime = tNow, m_uSyncListTime is still 0.
How could this happen? I've spent a whole day debugging this. I checked my code and there is no #pragma pack(1)/pack() mismatch.

The issue is more than likely that you're using your debugger on code that has been optimized. As your comment suggested, you are debugging code compiled with the -O3 flag, which enables aggressive optimization.
Even though you're using gdb, Visual Studio and other debuggers have the same issue: debugging optimized code and expecting the debugger to "work", in the sense of following along with the lines of the source code and the variables declared in it.
A debugger assumes that the lines of the source code match up with the generated machine code. With optimizations turned on, this is no longer guaranteed: code and variables are eliminated, moved, and so on. Therefore the lines (including variable declarations) you believe should be at a certain location may not be there in the final optimized build.
The debugger cannot discern these changes, so you get erroneous values reported for variables, or in some cases "variable doesn't exist" errors.
It also serves as a good check to do a simple cout or log of the values in question when there is a problem with the debugging environment. There are situations where even debuggers get things wrong, so a backup verification mechanism (logging, printf() or cout statements, etc.) should be used.
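As an illustration, here is a minimal logging sketch of that backup verification, applied to the UpdateList() code above (fprintf to stderr is just one option, and it assumes uint32 is a plain unsigned int for the %u format):
#include <cstdio>

int CCtrl::UpdateList()
{
    uint32 tNow = GetNowTime();
    for (uint8 i = 0; i < uRankListNum; i++)
    {
        m_stRankCache.Append(i);
    }
    m_uSyncListTime = tNow;
    // Verify the values independently of the debugger: if this prints the
    // expected numbers, the program is fine and only the debug view is stale.
    std::fprintf(stderr, "UpdateList: this=%p tNow=%u m_uSyncListTime=%u\n",
                 (void*)this, tNow, m_uSyncListTime);
    return 0;
}
If the log shows m_uSyncListTime equal to tNow while gdb still displays 0, the discrepancy is purely an artifact of debugging -O3 code.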

Related

assert()s, optimisation and an assume() directive in D

Say I have an assert() something like
assert( x < limit );
I took a look at the behaviour of the optimiser in GDC in release and debug builds with the following snippet of code:
uint cxx1( uint x )
{
    assert( x < 10 );
    return x % 10;
}

uint cxx1a( uint x )
in { assert( x < 10 ); }
body
{
    return x % 10;
}

uint cxx2( uint x )
{
    if ( !( x < 10 ))
        assert(0);
    return x % 10;
}
Now when I build in debug mode, the asserts have the very pleasing effect of triggering huge optimisation. GDC gets rid of the horrid code for the modulo operation entirely, because it knows the possible range of x from the assert's if-condition. But in release mode the if-condition is discarded, so all of a sudden the horrid code comes back, and there is no longer any optimisation in cxx1(), nor even in cxx1a(). It is very ironic that release mode generates far worse code than debug mode. Of course, no-one wants the executable code for the if-tests to be present in a release build; we must lose all that overhead.
Now ideally, I would want to express the condition in the sense of communicating information to the compiler, regardless of release / debug builds, about conditions that may always be assumed to be true, and so such assumptions can guide optimisation in very powerful ways.
I believe some C++ compilers have something called __assume() or some such, but memory fails me here. GCC has a __builtin_unreachable() special directive which might be usable to build an assume() feature. Basically, if I could build my own assume() directive, it would have the effect of asserting certain truths about known values or known ranges, exposing/publishing these to the optimisation passes regardless of release/debug mode, but without generating any actual code for the assume() condition in a release build, while in debug mode it would be exactly the same as assert().
I tried an experiment, which you see in cxx2, and it does always trigger the optimisation, so good job there; but it generates what is morally debug code for the assume()'s if-condition even in release mode: a test plus a conditional jump to an undefined instruction in order to halt the process.
Does anyone have any ideas about whether this is solvable? Or do you think this is a useful D compiler fantasy wish-list item?
As far as I know, __builtin_unreachable is the next best replacement for an assume-like function in GCC. In some cases the if-condition might still not get optimized out, though: "Assume" clause in gcc
The GCC builtins are available in GDC by importing gcc.builtins. Here's an example how to wrap the __builtin_unreachable function:
import gcc.builtins;

void assume()(bool condition)
{
    if (!condition)
        __builtin_unreachable();
}

bool foo(int a)
{
    assume(a > 10);
    return a > 10;
}
There are two interesting details here:
We don't need string mixins or similarly complicated stuff. As long as you compile with -O, GDC will completely optimize the function call away anyway.
For this to work, the assume function must get inlined. Unfortunately, inlining normal functions is not completely supported when assume is in a different module than the calling function. As a workaround, we use a template with zero template arguments; this should make sure inlining always works.
You can test and modify this example here:
explore.dgnu.org
Now we (GDC developers) could easily rewrite assert(...) to if(...) __builtin_unreachable() in release mode. But this could break some code so dmd should implement this first.
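For comparison, the same idea can be sketched in C++ on top of GCC/Clang's __builtin_unreachable (ASSUME is a name invented here, not a standard macro, and the condition must be side-effect-free):
#include <cassert>

#ifdef NDEBUG
// Release: under -O, GCC typically removes the test and merely learns
// from it that cond holds, guiding later optimisation passes.
#define ASSUME(cond) do { if (!(cond)) __builtin_unreachable(); } while (0)
#else
// Debug: behaves exactly like a plain assert.
#define ASSUME(cond) assert(cond)
#endif

unsigned mod10(unsigned x)
{
    ASSUME(x < 10);
    return x % 10; // with the range information, GCC can drop the division
}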
OK, I don't really know what you want; cxx2 is the solution.
Some more info.

Meaning of weird side-effectless statement inside if

I was browsing through a project and came across this:
if(!StaticAnimatedEntities)
    int esko = esko = 2;
(The type of StaticAnimatedEntities here is a plain unsigned short.)
It struck me as very odd, so I grepped the project for esko and found other similar ifs with nothing but that line inside them, for example this:
if(ItemIDMap.find(ID) != ItemIDMap.end())
    int esko = esko = 2;
(Note that there are no other variables named esko outside those ifs.)
What is the meaning of this cryptic piece of code?
You can sometimes see code like this just to serve as an anchor location to put a breakpoint on in an interactive debugger in order to trap some "unusual" (most often - erroneous) conditions.
Debugger-provided conditional breakpoints are usually very slow and/or primitive, so people often deliberately plan ahead and provide such conditional branches in order to create a compiled-in location for a breakpoint. Such a compiled-in conditional breakpoint does not slow down program execution nearly as much as a debugger-provided conditional breakpoint would.
In many cases such code is surrounded by #ifndef NDEBUG/#endif to prevent it from getting into the production builds. In other cases people just leave it unprotected, believing that optimizing compiler will remove it anyway.
In order for it to work, the code under the if must generate some machine code in debug builds; otherwise it would be impossible to put a breakpoint on it. Different people have different preferences in that regard, but the code virtually always looks weird and meaningless.
It is the meaninglessness of that code that provides programmers with full freedom to write anything they want there. And I'd say that it often becomes a part of each programmer's signature style, a fingerprint of sorts. The guy in question does int esko = esko = 2;, apparently.
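A sketch of the pattern combined with the NDEBUG guard described above (volatile is added here so the statement survives even mild optimization; the names follow the question):
#ifndef NDEBUG
    if (!StaticAnimatedEntities)
    {
        // Deliberately meaningless code: it exists only so the debugger has
        // a machine instruction to set a breakpoint on in debug builds.
        volatile int esko = 2;
        (void)esko;
    }
#endif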

Are comparisons between macro values bad in embedded programming?

I am building a program that needs to run on an ARM processor.
The processor has plenty of resources to run the program, so this question is not directly related to this type of processor; it relates to less powerful ones, where resources and computing power are 'limited'.
To print debug information (or even to activate portions of code) I am using a header file where I define macros that I set to true or false, like this:
#define DEBUG_ADCS_OBC true
and in the main program:
if (DEBUG_ADCS_OBC == true) {
    printf("O2A ");
    for (int j = 0; j < 50; j++) {
        printf("%x ", buffer_obc[j]);
    }
}
Is this a bad habit? Are there better ways to do this?
In addition, will having these if checks affect performance in a measurable way?
Or is it safe to assume that when the code is compiled the IFs are somehow removed from the flow, as the comparison is made between two values that cannot change?
Since the expression DEBUG_ADCS_OBC == true can be evaluated at compile time, an optimizing compiler will figure out that the branch is either always taken or always bypassed, and eliminate the condition altogether. Therefore, there is zero runtime cost to the expression when you use an optimizing compiler.
If you are compiling with all optimization turned off, use conditional compilation instead. This will do the same thing an optimizing compiler does with a constant expression, but at the preprocessor stage. Hence the compiler will not "see" the conditional even with optimization turned off.
Note 1: Since DEBUG_ADCS_OBC has the meaning of a boolean flag, use DEBUG_ADCS_OBC without == true for a somewhat cleaner look.
Note 2: Rather than defining the value in the body of your program, consider passing it on the command line, for example -DDEBUG_ADCS_OBC=true. This lets you change the debug setting without modifying your source code, simply by manipulating the makefile or one of its options.
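For instance, hypothetical compiler invocations (the file name is invented here):
gcc -O2 -DDEBUG_ADCS_OBC=true  -c main.c   # debug printing enabled
gcc -O2 -DDEBUG_ADCS_OBC=false -c main.c   # debug printing disabled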
The code you are using is evaluated every time your program reaches this line. Since every change of DEBUG_ADCS_OBC requires a recompile of your code anyway, you should use #ifdef/#ifndef expressions instead. Their advantage is that they are evaluated only once, at compile time.
Your code segment could look like the following:
Header:
//Remove this line if debugging should be disabled
#define DEBUG_DCS_OBS
Source:
#ifdef DEBUG_DCS_OBS
printf("O2A ");
for (int j = 0; j < 50; j++) {
    printf("%x ", buffer_obc[j]);
}
#endif
The problem with getting the compiler to do this is the unnecessary run-time test of a constant expression. An optimising compiler will remove it, but equally it may issue warnings about constant expressions or, when the macro is undefined, warnings about unreachable code.
It is not a matter of being "bad in embedded programming"; it bears little merit in any programming domain.
The following is the more usual idiom. It will not include unreachable code in the final build, and in an appropriately configured syntax-highlighting editor or IDE it will generally show you which code sections are active and which are not.
#define DEBUG_ADCS_OBC
...
#if defined DEBUG_ADCS_OBC
printf("O2A ");
for (int j = 0; j < 50; j++)
{
    printf("%x ", buffer_obc[j]);
}
#endif
I'll add one thing that I didn't see mentioned.
If optimizations are disabled on debug builds, and even if runtime performance impact is insignificant, code is still included. As a result debug builds are usually bigger than release builds.
If you have very limited memory, you can run into situation where release build fits in the device memory and debug build does not.
For this reason I prefer compile-time #if over runtime if. I can keep the memory usage of debug and release builds closer to each other, and it's easier to keep using the debugger at the end of the project.
The optimizer will solve the extra-resource problem, as mentioned in the other replies, but I want to add another point. From a code-readability point of view, this code will be repeated many times, so consider creating your own specific printing macros. Those macros are what should be enclosed by the debug enable/disable macros.
#ifdef DEBUG_DCS_OBS
#define myCustomPrint(...) printf(__VA_ARGS__) /* your custom printing code */
#else
#define myCustomPrint(...)                     /* no code here */
#endif
This also decreases the probability of the macro being forgotten in some file, which would cause a real optimization problem.
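Call sites then stay identical in both builds; for example, the loop from the question becomes (a sketch using the names above):
myCustomPrint("O2A ");
for (int j = 0; j < 50; j++) {
    myCustomPrint("%x ", buffer_obc[j]);
}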

gdb - re-setting a const

I have
const int MAX_CONNECTIONS = 500;
//...
if(clients.size() < MAX_CONNECTIONS) {
    //...
}
I'm trying to find the "right" choice for MAX_CONNECTIONS. So I fire up gdb and set MAX_CONNECTIONS = 750. But it seems my code isn't responding to this change. I wonder if it's because the const int was resolved at compile time even though it wound up getting bumped at runtime. Does this sound right, and, using GDB is there any way I can bypass this effect without having to edit the code in my program? It takes a while just to warm up to 500.
I suspect that the compiler, seeing that the variable is const, is inlining the constant into the assembly rather than having the generated code actually read the value of the MAX_CONNECTIONS variable. The C++ spec is worded in such a way that if a variable of primitive type is explicitly marked const, the compiler may make certain assumptions about it for the purposes of optimization, since any attempt to change that constant is either (1) illegal or (2) results in undefined behavior.
If you want to use GDB to do things like this, consider marking the variable volatile rather than const to indicate to the compiler that it shouldn't optimize it. Alternatively, have this information controlled by some other data source (say, a configuration option inside a file) so that you aren't blasting the program's memory out from underneath it in order to change the value.
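For instance, a minimal sketch of the volatile approach (the helper name is invented here):
#include <cstddef>

// volatile forces an actual memory read on every use, so overwriting the
// value from gdb ("set var MAX_CONNECTIONS = 750") immediately takes effect.
static volatile int MAX_CONNECTIONS = 500;

bool canAcceptClient(std::size_t connectedClients)
{
    return connectedClients < static_cast<std::size_t>(MAX_CONNECTIONS);
}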
Hope this helps!
By declaring it const, you're telling the compiler it has the freedom not to load the value, but to build it directly into the code when possible. An allocated copy may still exist for those times when the particular instructions chosen need to load a value rather than take an immediate one, or it could be omitted by the compiler as well. That's a bit of a loose answer, short on standardese, but that's the basic idea.
As this post is quite old, my answer is more of a reference for my future self. Assuming you compiled in debug mode, running the following expression in the debugger (lldb in my case) works:
const_cast<int&>(MAX_CONNECTIONS) = 750
In case you have to change the constant often, e.g. in a loop, set a breakpoint and evaluate the expression each time the breakpoint is hit
breakpoint set <location>
breakpoint command add <breakpoint_id>
const_cast<int&>(MAX_CONNECTIONS) = 750
DONE

Why is passing a char* to this method failing?

I have a C++ method such as:
bool MyClass::Foo(char* charPointer)
{
    return CallExternalAPIFunction(charPointer);
}
Now I have some static method somewhere else such as:
bool MyOtherClass::DoFoo(char* charPointer)
{
    return _myClassObject.Foo(charPointer);
}
My issue is that my code breaks at that point. It doesn't exit the application or anything; it just never returns any value. To try to pinpoint the issue, I stepped through the code with the Visual Studio 2010 debugger and noticed something weird.
When I step into the DoFoo function and hover over charPointer, I actually see the value it was called with (an IP address string in this case). However, when I step into Foo and hover over charPointer, nothing shows up, the external API function call never returns (it's as if it's simply stepped over), and my program resumes its execution after the call to DoFoo.
I also tried using the Exceptions... feature of the VS debugger (to pick up first-chance exceptions), but it never picked up anything.
Has this ever happened to anyone? Am I doing something wrong?
Thank you.
You need to build the project with Debug settings. Release settings mean that optimizations are enabled and optimizations make debugging a beating.
Without optimizations, there is a very close correspondence between statements in your C++ code and blocks of machine code in the program. The program is slower (often far slower) but it's easier to debug because you can observe what each statement does.
The optimizer reorders your code, eliminates variables, inlines functions, unrolls loops, and does all sorts of other things to make the program fast. The program is faster (often much faster) but it's far more difficult to debug because the correspondence between the statements in your C++ code and the instructions in the machine code is no longer there.
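If you must keep Release settings for the rest of the project, MSVC also lets you turn optimization off locally around the code in question; a sketch using the MSVC-specific optimize pragma, applied to the function from the question:
// Disable optimization for the functions below so the debugger can follow
// charPointer through Foo; restore the project settings afterwards.
#pragma optimize("", off)
bool MyClass::Foo(char* charPointer)
{
    return CallExternalAPIFunction(charPointer);
}
#pragma optimize("", on)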