I have
const int MAX_CONNECTIONS = 500;
//...
if(clients.size() < MAX_CONNECTIONS) {
//...
}
I'm trying to find the "right" choice for MAX_CONNECTIONS. So I fire up gdb and set MAX_CONNECTIONS = 750, but it seems my code isn't responding to this change. I wonder if it's because the const int was resolved at compile time, even though it wound up getting bumped at runtime. Does this sound right? And is there any way, using GDB, to bypass this effect without having to edit and recompile my program? It takes a while just to warm up to 500 connections.
I suspect that the compiler, seeing that the variable is const, is inlining the constant into the assembly and not having the generated code actually read the value of the MAX_CONNECTIONS variable. The C++ spec is worded in a way where if a variable of primitive type is explicitly marked const, the compiler can make certain assumptions about it for the purposes of optimization, since any attempt to change that constant is either (1) illegal or (2) results in undefined behavior.
If you want to use GDB to do things like this, consider marking the variable volatile rather than const to indicate to the compiler that it shouldn't optimize it. Alternatively, have this information controlled by some other data source (say, a configuration option inside a file) so that you aren't blasting the program's memory out from underneath it in order to change the value.
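For instance, a minimal sketch of the volatile route, reusing the snippet from the question:

// volatile tells the compiler it must load the current value on every
// use, so a write from the debugger actually takes effect.
volatile int MAX_CONNECTIONS = 500;
//...
if(clients.size() < MAX_CONNECTIONS) {
//...
}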
Hope this helps!
By telling the compiler it's const, you're giving it the freedom not to load the value, but to build it directly into the code when possible. An allocated copy may still exist for those times when the particular instructions chosen need to load a value rather than use an immediate, or it may be omitted by the compiler as well. That's a bit of a loose answer short on standardese, but that's the basic idea.
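To illustrate (a hypothetical example with made-up names; actual code generation varies by compiler and flags):

#include <cstddef>

const int MAX_CONNECTIONS = 500;

// With optimizations enabled, the compiler may encode 500 as an
// immediate operand here ("current < 500") and never read
// MAX_CONNECTIONS from memory, so a debugger write to the variable
// has no visible effect.
bool room_for_more(std::size_t current)
{
    return current < MAX_CONNECTIONS;
}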
As this post is quite old, my answer is more like a reference to my future self. Assuming you compiled in debug mode, running the following expression in the debugger (lldb in my case) works:
const_cast<int&>(MAX_CONNECTIONS) = 750
In case you have to change the constant often, e.g. in a loop, set a breakpoint and evaluate the expression each time the breakpoint is hit:
breakpoint set <location>
breakpoint command add <breakpoint_id>
  expr const_cast<int&>(MAX_CONNECTIONS) = 750
  DONE
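For GDB, a rough equivalent of the same idea (again assuming an unoptimized build where the generated code really reads the variable; the file/line location is hypothetical). GDB may refuse to assign directly to a const lvalue, so writing through a cast of the variable's address side-steps that:

break server.cc:42
commands
  set var *(int *)&MAX_CONNECTIONS = 750
  continue
end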
I'm struggling with a nonsensical if statement...
Consider this code in a C++ file
if (coreAudioDevice) {
    delete coreAudioDevice;
    coreAudioDevice = nullptr;
}
coreAudioDevice = AudioDevice::GetDevice(defaultOutputDeviceID, false, coreAudioDevice, true);
if (coreAudioDevice)
{
    coreAudioDevice->setDefaultDevice(true);
    // we use the quick mode which skips initialisation; cache the device name (in AudioDevice)
    // using an immediate, blocking look-up.
    char devName[256];
    coreAudioDevice->GetName(devName, 256);
    AUDINFO ("Using default output device %p #%d=\"%s\".\n",
             defaultOutputDeviceID, coreAudioDevice, coreAudioDevice->GetName());
}
else
    AUDERR ("Failed to obtain a handle on the default device (%p)\n", coreAudioDevice);
calling a function in an ObjC++ file:
AudioDevice *AudioDevice::GetDevice(AudioObjectID devId, bool forInput, AudioDevice *dev, bool quick)
{
    if (dev) {
        if (dev->ID() != devId) {
            delete dev;
        } else {
            return nullptr;
        }
    }
    dev = new AudioDevice(devId, quick, forInput);
    return dev;
}
Which leads to the following terminal output:
ERROR coreaudio.cc:232 [init]: Failed to obtain a handle on the default device (0x7f81a1f1f1b0)
Evidently the if fails as though coreAudioDevice were NULL, and yet the else branch prints a non-null value for that same variable.
I tried different compiler options and a different compiler (clang 4.0.1 vs. 5.0.1), apparently there is really something fishy in my code. Any thoughts?
Reaching the end of a value-returning function without returning a value is undefined behavior in C++.
See http://en.cppreference.com/w/cpp/language/ub and What are all the common undefined behaviours that a C++ programmer should know about?.
So the call to setDefaultDevice() can legally result in anything. When a program's control flow leads to undefined behavior (here, the call to setDefaultDevice()), the compiler is free to compile it into an executable that does anything at all.
In this case, entering the if block with coreAudioDevice non-null leads to UB. The optimizing compiler foresees this and makes execution take the else branch instead; that way it can remove the first branch, and the if entirely, to produce more optimized code.
See https://blogs.msdn.microsoft.com/oldnewthing/20140627-00/?p=633
Without optimizations the program should normally run as expected.
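A minimal standalone sketch of the effect (made-up names; the exact outcome depends on compiler and flags, since the behavior is undefined):

#include <cstdio>

bool setFlag(bool value)
{
    static bool flag = false;
    flag = value;
    // no return statement: undefined behavior if this is ever called
}

int main()
{
    if (setFlag(true))          // with optimizations, the compiler may
        std::puts("flag set");  // assume this call can never happen and
                                // drop the branch, or do anything else
    return 0;
}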
Well, at least I found a reason, but no understanding (yet).
I had defined this method, without noticing the compiler warning (amidst a bunch of deprecation warnings printed multiple times because of concurrent compilation...):
bool setDefaultDevice(bool isDefault)
{
    mDefaultDevice = isDefault;
}
Indeed, no return value.
Notice that I call this method inside the skipped if block, so in theory it never even gets executed. BTW, that call is what led me to discover this strange issue.
The issue goes away when I remove the call or when I make the method void as intended.
I think this also explains the very strange way of crashing I've seen: somehow the optimiser gets completely confused because of this. I'm tempted to call this a compiler bug; I don't use the return value from the method, so flow shouldn't be affected at all IMHO.
Ah, right. Should I read that as "free to build an exec that can do anything EXCEPT the sensical thing"? If so, that former boss of mine had a point banning C++ as an anomaly (the exact word was French, "saleté", roughly "filth")...
Anyway, I can understand why the behaviour would be undefined when you don't know that a function doesn't actually put a return value on the stack. You'd be popping bytes off the stack after the return, messing things up. (Read heap for stack if necessary =] ). I'm guessing that's what would happen when you run this kind of code without optimisation, in good ole C, or with the buggy method defined out of line (so the optimiser cannot know that it's buggy).
But once you know that a function doesn't actually return a value, and you see that the value wouldn't be used anyway, you should be able to emit code that doesn't pop the corresponding number of bytes. IOW, code that behaves as expected, with a big fat compile-time warning. Presuming the standard allows this, that would be the sensible thing to do, rather than optimising the entire tainted block away because that'd be faster. If that's indeed the reasoning followed in clang, it doesn't exactly inspire confidence...
Does the standard say this cannot be an error by default? I'd certainly prefer that over the current behaviour!
I have a C++ method such as:
bool MyClass::Foo(char* charPointer)
{
    return CallExternalAPIFunction(charPointer);
}
Now I have some static method somewhere else such as:
bool MyOtherClass::DoFoo(char* charPointer)
{
    return _myClassObject.Foo(charPointer);
}
My issue is that my code breaks at that point. It doesn't exit the application or anything, it just never returns any value. To try and pinpoint the issue, I stepped through the code using the Visual Studio 2010 debugger and noticed something weird.
When I step into the DoFoo function and hover over charPointer, I actually see the value it was called with (an IP address string in this case). However, when I step into Foo and hover over charPointer, nothing shows up, and the external API function call never returns (it's as if it were simply stepped over); my program then resumes its execution after the call to DoFoo.
I also tried using the Exception... feature of the VS debugger (to pick up first chance exceptions) but it never picked up anything.
Has this ever happened to anyone? Am I doing something wrong?
Thank you.
You need to build the project with Debug settings. Release settings mean that optimizations are enabled and optimizations make debugging a beating.
Without optimizations, there is a very close correspondence between statements in your C++ code and blocks of machine code in the program. The program is slower (often far slower) but it's easier to debug because you can observe what each statement does.
The optimizer reorders your code, eliminates variables, inlines functions, unrolls loops, and does all sorts of other things to make the program fast. The program is faster (often much faster) but it's far more difficult to debug because the correspondence between the statements in your C++ code and the instructions in the machine code is no longer there.
I have a function in my program that performs a whole bunch of floating point math. It returns an array of values which is not used anywhere in my program yet.
I want to test this piece of code for speed under maximum optimizations, however since the code isn't used, the compiler conveniently skips the function all together and I can't get a time on it.
How do I force the compiler to keep and run that section of code under maximum optimizations, even though the result is not used? (I just want a sense of how fast the section runs.)
I'm running Visual C++ 2008.
You could use SecureZeroMemory() to overwrite the result after it has been received from the function. You don't even need to overwrite the whole result; one array element will be enough, and maybe you can even pass zero as the "number of bytes" so that the function does nothing at all.
This will do the trick on Windows - SecureZeroMemory() is intended to never be optimized out by the compiler. Using it is pretty straightforward and it's rather fast.
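A minimal sketch of that approach (ComputeValues is a made-up stand-in for the function under test):

#include <windows.h>

double *ComputeValues();   // hypothetical function doing the float math

void TimeIt()
{
    double *result = ComputeValues();
    // SecureZeroMemory is guaranteed not to be optimized out, so
    // overwriting even one element makes the result count as used.
    SecureZeroMemory(result, sizeof(double));
}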
I'm sure there are many compiler tricks, but the easiest way is to just make it look like you are using the value. In this case, just pass the returned array to some other function. The other function doesn't need to do anything, but that should be enough to convince the compiler you need the results.
If you find that your empty second function is being optimized out as well, then just stick it in a shared library (DLL) and it is impossible for the compiler to know how it is being used.
How you allocate the result can also change this. If you pass the original function a pointer, you could just pass it a heap pointer. Since that pointer may be used somewhere else it is highly unlikely the compiler could optimize away the code, as it has no idea if the results will be used or not.
You could also just legitimately use the data. It makes sense to verify the results in another function. If doing performance testing just put this verification part outside of the timed section. This is generally how I do such performance tests (make sure the result is checked/used).
This is what a test case is for. Write a test case in a separate binary (even just in main()) which sets a throwaway local variable to the result of the function. Time it using your preferred method (e.g. by capturing time(NULL) immediately before and after the assignment and printing the difference). You should get a decent idea of the running time from that.
EDIT: time(NULL) is whole-second precision = bad and evil. Use clock(), as shown here, for the most accurate precision in the C/C++ standard library.
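For example, a rough timing harness with clock() (ComputeValues is again a made-up stand-in for the function under test):

#include <cstdio>
#include <ctime>

double *ComputeValues();   // hypothetical function doing the float math

int main()
{
    std::clock_t start = std::clock();
    double *result = ComputeValues();
    std::clock_t stop = std::clock();

    std::printf("elapsed: %.3f s\n",
                static_cast<double>(stop - start) / CLOCKS_PER_SEC);
    std::printf("result[0] = %f\n", result[0]);  // use the result so the
                                                 // call isn't optimized out
    return 0;
}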
If you are using Visual Studio, the code below will do the job, but I don't know of an equivalent solution for GCC:
#pragma optimize( "", off )
// ... the code that must not be optimized away ...
#pragma optimize( "", on )
I have a piece of templated code that is never run, but is compiled. When I remove it, another part of my program breaks.
First off, I'm a bit at a loss as to how to ask this question. So I'm going to try throwing lots of information at the problem.
Ok, so, I went to completely redesign my test project for my experimental core library thingy. I use a lot of template shenanigans in the library. When I removed the "user" code, the tests gave me a memory allocation error. After quite a bit of experimenting, I narrowed it down to this bit of code (out of a couple hundred lines):
void VOODOO(components::switchBoard &board) {
    board.addComponent<using_allegro::keyInputs<'w'> >();
}
Fundamentally, what's weirding me out is that it appears that the act of compiling this function (and the template function it then uses, and the template functions those then use...) makes this bug not appear. This code is not being run. Similar code (the same, but for different key vals) occurs elsewhere, but is within Boost TDD code.
I realize I certainly haven't given enough information for you to solve it for me; I tried, but it more-or-less spirals into most of the code base. I think I'm most looking for "here's what the problem could be", "here's where to look", etc. There's something that's happening during compile because of this line, but I don't know enough about that step to begin looking.
Sooo, how can a (presumably) compilied, but never actually run, bit of templated code, when removed, cause another part of code to fail?
Error:
Unhandled exception at 0x6fe731ea (msvcr90d.dll) in Switchboard.exe:
0xC0000005: Access violation reading location 0xcdcdcdc1.
Callstack:
operator delete(void * pUserData)
allocator< class name related to key inputs callbacks >::deallocate
vector< same class >::_Insert_n(...)
vector< " " >::insert(...)
vector<" ">::push_back(...)
It looks like maybe the vector isn't valid, because _MyFirst and similar data members are showing values of 0xcdcdcdcd in the debugger. But the vector is a member variable...
Update: The vector isn't valid because it's never made. I'm getting a channel ID value stomp, which is making me treat one type of channel as another.
Update:
Searching through with the debugger again, it appears that my method for giving each "channel" its own unique ID isn't actually giving me a unique ID:
inline static const char channel<template args>::idFunction() {
    return reinterpret_cast<char>(&channel<CHANNEL_IDENTIFY>::idFunction);
};
Update2: These two are giving the same:
slaveChannel<switchboard, ALLEGRO_BITMAP*, entityInfo<ALLEGRO_BITMAP*> >
slaveChannel<key<c>, char, push<char> >
Sooo, having another compiled channel type changing things makes sense, because it shifts around the values of the idFunctions? But why are there two idFunctions with the same value?
You seem to be returning the address of the function as a character? That looks weird. A char has a much smaller bit count than a pointer, so it's highly possible you get the same values. That could be the reason why changing the code layout fixes/breaks your program.
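To illustrate the point: a char keeps only the low byte of the function's address, so distinct instantiations can easily collide. A common alternative (a sketch, not the poster's code) is to use the address of a per-instantiation static object as the ID, without truncating it:

#include <cstdio>

template <typename T>
struct channel {
    static const char id_tag;   // one distinct object per instantiation
    // the address itself is the unique ID; no narrowing cast involved
    static const void *id() { return &id_tag; }
};

template <typename T>
const char channel<T>::id_tag = 0;

int main()
{
    // distinct instantiations yield distinct IDs
    std::printf("%p %p\n", channel<int>::id(), channel<float>::id());
    return 0;
}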
As a general answer (though aaa's comment alludes to this): When something like this affects whether a bug occurs, it's either because (a) you're wrong and it is being run, or (b) the way that the inclusion of that code happens to affect your code, data, and memory layout in the compiled program causes a heisenbug to change from visible to hidden.
The latter generally occurs when something involves undefined behavior. Sometimes a bogus pointer value will cause you to stomp on a bit of your code (which might or might not be important depending on the code layout), or sometimes a bogus write will stomp on a value in your data stack that might or might not be a pointer that's used later, or so forth.
As a simple example, supposing you have a stack that looks like:
float data[10];
int never_used;
int *important_pointer;
And then you erroneously write
data[10] = 0;
Then, assuming that stack got allocated in linear order, you'll stomp on never_used, and the bug will be harmless. However, if you remove never_used (or change something so the compiler knows it can remove it for you -- maybe you remove a never-called function call that would use it), then it will stomp on important_pointer instead, and you'll now get a segfault when you dereference it.
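A compilable sketch of that scenario (the behavior is undefined, so what actually happens varies with compiler, flags, and stack layout):

#include <cstdio>

int main()
{
    float data[10];
    int   never_used = 0;
    int   value = 42;
    int  *important_pointer = &value;
    (void)never_used;   // keep it around in this sketch

    data[10] = 0;  // out-of-bounds write: undefined behavior; depending on
                   // layout it may clobber never_used (seemingly harmless)
                   // or important_pointer (likely a crash later)

    std::printf("%d\n", *important_pointer);
    return 0;
}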
It is best practice to initialise a variable at the time of declaration.
int TMyClass::GetValue()
{
    int vStatus = OK;
    // A function returns a value
    vStatus = DoSomeThingAndReturnErrorCode();
    if (!vStatus)
        //Do something
    else
        return (vStatus);
}
A statement like int vStatus = OK; causes no issues in a DEBUG MODE build.
The same code, when built in RELEASE MODE, produces a warning:
w8004: 'vStatus' is assigned a value that is never used.
Also, I am using the same variable further down in the same function, as in if(!vStatus), and I also return its value with return(vStatus);.
When I looked around the web for pointers on this Debug vs. Release difference, the gist was that compilers expect you to initialise your variable at the time you declare it.
I am using Borland developer studio 6 with windows 2003 server.
Any pointers will help me to understand this issue.
Thanks
Raj
You initialise vStatus to OK, then you immediately assign a new value.
Instead of doing that, you should initialise vStatus with a value that you're actually going to use.
Try doing the following instead:
int TMyClass::GetValue()
{
    // A function returns a value
    int vStatus = DoSomeThingAndReturnErrorCode();
    if (!vStatus)
        //Do something
    else
        return (vStatus);
}
Edit: Some clarification.
Initialising a variable only to never use that value, and then assigning another value to it, is inefficient. In your case, where you're just using ints, it's not really a problem. However, if there's a large overhead in creating / copying / assigning your types, that overhead can be a performance drain, especially if you do it a lot.
Basically, the compiler is trying to help you out by pointing out areas of your program where your code can be improved.
If you're wondering why there's no warning in debug mode, it's because the passes that perform dataflow analysis (which is what finds the problem) are only run as part of optimization.