It is a best practise to initialise a variable at the time of declaration.
int TMyClass::GetValue()
{
    int vStatus = OK;
    // A function returns a value
    vStatus = DoSomeThingAndReturnErrorCode();
    if (!vStatus)
    {
        // Do something
    }
    else
        return (vStatus);
}
A statement like int vStatus = OK; causes no issues in a DEBUG build. The same code, when built in RELEASE mode, produces a warning:
W8004: 'vStatus' is assigned a value that is never used.
I also use the same variable further down in the same function, in if(!vStatus), and I return its value with return(vStatus);.
When I searched the web for pointers on this debug vs. release difference, the advice was that compilers expect you to initialise a variable at the time of declaration.
I am using Borland Developer Studio 6 on Windows Server 2003.
Any pointers will help me to understand this issue.
Thanks
Raj
You initialise vStatus to OK, then you immediately assign it a new value.
Instead of doing that, you should initialise vStatus with the value you're actually going to use.
Try doing the following instead:
int TMyClass::GetValue()
{
    // A function returns a value
    int vStatus = DoSomeThingAndReturnErrorCode();
    if (!vStatus)
    {
        // Do something
    }
    else
        return (vStatus);
}
Edit: Some clarification.
Initialising a variable with a value you never use, and then assigning another value to it, is inefficient. In your case, where you're just using ints, it's not really a problem. However, if there's a large overhead in constructing, copying, or assigning your types, that overhead can be a performance drain, especially if you do it a lot.
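To illustrate the overhead for a more expensive type, here is a minimal sketch; the Config type and LoadConfig function are invented for the example, not taken from your code:

#include <vector>

// Hypothetical type whose default constructor does real work.
struct Config {
    std::vector<int> table;
    Config() : table(1000) {}          // builds a 1000-element table
};

Config LoadConfig();                    // assumed to be defined elsewhere

void Wasteful() {
    Config c;                           // constructs a table...
    c = LoadConfig();                   // ...then discards that work and copies another one
}

void Better() {
    Config c = LoadConfig();            // initialised directly with the value that is actually used
}

For plain ints the two versions cost essentially the same, which is why the warning is only a hint in your case.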
Basically, the compiler is trying to help you out by pointing out places in your program where the code can be improved.
If you're wondering why there's no warning in debug mode, it's because the passes that perform dataflow analysis (which is what finds the problem) are only run as part of optimization.
Related
I'm struggling with a nonsensical if statement...
Consider this code in a C++ file
if (coreAudioDevice) {
delete coreAudioDevice;
coreAudioDevice = nullptr;
}
coreAudioDevice = AudioDevice::GetDevice(defaultOutputDeviceID, false, coreAudioDevice, true);
if (coreAudioDevice)
{
coreAudioDevice->setDefaultDevice(true);
// we use the quick mode which skips initialisation; cache the device name (in AudioDevice)
// using an immediate, blocking look-up.
char devName[256];
coreAudioDevice->GetName(devName, 256);
AUDINFO ("Using default output device %p #%d=\"%s\".\n",
defaultOutputDeviceID, coreAudioDevice, coreAudioDevice->GetName());
}
else
AUDERR ("Failed to obtain a handle on the default device (%p)\n", coreAudioDevice);
calling a function in an ObjC++ file:
AudioDevice *AudioDevice::GetDevice(AudioObjectID devId, bool forInput, AudioDevice *dev, bool quick)
{
if (dev) {
if (dev->ID() != devId) {
delete dev;
} else {
return nullptr;
}
}
dev = new AudioDevice(devId, quick, forInput);
return dev;
}
Which leads to the following terminal output:
ERROR coreaudio.cc:232 [init]: Failed to obtain a handle on the default device (0x7f81a1f1f1b0)
Evidently the if shouldn't fail: coreAudioDevice is supposedly NULL, and yet the else branch prints a non-null value for that same variable.
I tried different compiler options and a different compiler (clang 4.0.1 vs. 5.0.1); apparently there is really something fishy in my code. Any thoughts?
Reaching the end of a value-returning function without returning a value is undefined behavior in C++.
See http://en.cppreference.com/w/cpp/language/ub and What are all the common undefined behaviours that a C++ programmer should know about?.
So the call to setDefaultDevice() can legally result in anything. When the program's control flow reaches undefined behavior (i.e. the call to setDefaultDevice()), the compiler is free to compile the program into an executable that does anything at all.
In this case, entering the if block with coreAudioDevice non-null leads to UB. The optimizing compiler foresees this and makes the code take the else branch instead; that way it can remove the first branch, and the if itself, entirely, producing more optimized code.
See https://blogs.msdn.microsoft.com/oldnewthing/20140627-00/?p=633
Without optimizations the program should normally run as expected.
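For illustration, here is a minimal sketch of the trap; the class and member names are invented, not taken from your project. With clang or gcc, -Werror=return-type turns the missing return into a hard error instead of a warning, which prevents this from slipping through:

struct Device {
    bool mDefault = false;

    // Buggy: declared to return bool, but control can fall off the end.
    // Calling it is undefined behaviour (clang/gcc warn via -Wreturn-type).
    bool setDefaultBuggy(bool isDefault)
    {
        mDefault = isDefault;
    }

    // Fixed: the function never needed to return anything.
    void setDefault(bool isDefault)
    {
        mDefault = isDefault;
    }
};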
Well, at least I found a reason, but no understanding (yet).
I had defined this method, without noticing the compiler warning (amidst a bunch of deprecation warnings printed multiple times because of concurrent compilation...):
bool setDefaultDevice(bool isDefault)
{
mDefaultDevice = isDefault;
}
Indeed, no return value.
Notice that I call this method inside the skipped if block - so theoretically I never got the chance to do that. BTW, it's what led me to discover this strange issue.
The issue goes away when I remove the call or when I make the method void as intended.
I think this also explains the very strange way of crashing I've seen: somehow the optimiser gets completely confused because of this. I'm tempted to call this a compiler bug; I don't use the return value from the method, so flow shouldn't be affected at all IMHO.
Ah, right. Should I read that as "free to build an exec that can do anything EXCEPT the sensical thing"? If so, that former boss of mine had a point banning C++ as an anomaly (the exact word was French, "saleté", roughly "filth")...
Anyway, I can understand why the behaviour would be undefined when you don't know a function doesn't actually put a return value on the stack. You'd be popping bytes off the stack after the return, messing things up. (Read heap for stack if necessary =] ). I'm guessing that's what would happen when you run this kind of code without optimisation, in good ole C, or with the buggy method defined out of line (so the optimiser cannot know that it's buggy).
But once you know that a function doesn't actually return a value and you see that the value wouldn't be used anyway, you should be able to emit code that doesn't pop the corresponding number of bytes. IOW, code that behaves as expected. With a big fat compile-time warning. Presuming the standard allows this that'd be the sensical thing to do, rather than optimise the entire tainted block away because that'd be faster. If that's indeed the reasoning followed in clang it doesn't actually inspire confidence...
Does the standard say this cannot be an error by default? I'd certainly prefer that over the current behaviour!
I have
const int MAX_CONNECTIONS = 500;
//...
if(clients.size() < MAX_CONNECTIONS) {
//...
}
I'm trying to find the "right" choice for MAX_CONNECTIONS. So I fire up gdb and set MAX_CONNECTIONS = 750. But it seems my code isn't responding to this change. I wonder if it's because the const int was resolved at compile time even though it wound up getting bumped at runtime. Does this sound right, and, using GDB is there any way I can bypass this effect without having to edit the code in my program? It takes a while just to warm up to 500.
I suspect that the compiler, seeing that the variable is const, is inlining the constant into the assembly and not having the generated code actually read the value of the MAX_CONNECTIONS variable. The C++ spec is worded in a way where if a variable of primitive type is explicitly marked const, the compiler can make certain assumptions about it for the purposes of optimization, since any attempt to change that constant is either (1) illegal or (2) results in undefined behavior.
If you want to use GDB to do things like this, consider marking the variable volatile rather than const to indicate to the compiler that it shouldn't optimize it. Alternatively, have this information controlled by some other data source (say, a configuration option inside a file) so that you aren't blasting the program's memory out from underneath it in order to change the value.
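A minimal sketch of both suggestions; ReadConfigInt is a hypothetical helper standing in for whatever configuration mechanism you use:

#include <cstddef>

int ReadConfigInt(const char* key, int fallback);   // hypothetical config helper, defined elsewhere

// Option 1: volatile forces the generated code to actually read the variable,
// so a debugger write to it takes effect (at the cost of disabling the constant folding).
volatile int MAX_CONNECTIONS = 500;

// Option 2: load the limit from a configuration source at startup instead of baking it in.
int maxConnections = ReadConfigInt("max_connections", /*fallback=*/500);

bool canAcceptClient(std::size_t clientCount) {
    return clientCount < static_cast<std::size_t>(MAX_CONNECTIONS);
}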
Hope this helps!
By telling it it's const, you're telling the compiler it has freedom to not load the value, but to build it directly into the code when possible. An allocated copy may still exist for those times when the particular instructions chosen need to load a value rather than having an immediate value, or it could be omitted by the compiler as well. That's a bit of a loose answer short on standardese, but that's the basic idea.
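To make that concrete, here is the kind of transformation the compiler is allowed to perform; the function is illustrative, not actual compiler output:

const int MAX_CONNECTIONS = 500;

int headroom(int clients) {
    // With optimizations on, this typically compiles down to "500 - clients"
    // with 500 as an immediate operand: nothing loads MAX_CONNECTIONS's storage,
    // so patching that memory in gdb has no effect on this function.
    return MAX_CONNECTIONS - clients;
}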
As this post is quite old, my answer is more like a reference to my future self. Assuming you compiled in debug mode, running the following expression in the debugger (lldb in my case) works:
const_cast<int&>(MAX_CONNECTIONS) = 750
In case you have to change the constant often, e.g. in a loop, set a breakpoint and evaluate the expression each time the breakpoint is hit
breakpoint set <location>
breakpoint command add <breakpoint_id>
expr const_cast<int&>(MAX_CONNECTIONS) = 750
DONE
I'm having a weird problem trying to get my release build working in Xcode 4.2 using LLVM. I've turned off all optimisation settings for the release scheme, and as far as I can tell the release build matches all the settings of the debug build. Regardless, the following problem occurs when working with some structures from Box2D, a physics library, though I am unsure whether the problem has anything to do with that specifically.
b2CircleShape* circleShape = new b2CircleShape();
circleShape->m_p.Set(0,0);
circleShape->m_radius = m_radius;
b2FixtureDef fixture;
fixture.shape = circleShape;
fixture.density = m_density;
m_fixtureDefs.push_back(fixture); // std::vector
b2FixtureDef fix2 = fixture;
b2FixtureDef fix3 = m_fixtureDefs[0]; // EXC_BAD_ACCESS
When I remove all instances of access to m_fixtures, no problems occur. When I run in the development scheme, no errors occur. I am really, really confused; if someone could point me in the right direction for errors to look for, it would be much appreciated.
EDIT:
More interesting stuff
for (vector<b2FixtureDef>::iterator i = m_fixtureDefs.begin() ; i != m_fixtureDefs.end(); ++i)
{
}
This appears to loop forever, which makes me very confused. It seems like the structure m_fixturesDef has some kind of problem, but I have no idea why whatever corruption is going on occurs only in this particular variable.
By default, POD objects are not initialized in C++, so their initial values (unless explicitly initialized) are indeterminate.
When you build in debug mode the compiler usually inserts extra initialization code to zero out values. Thus you can easily see different behaviors between debug and release builds.
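A minimal sketch of the general point (not specific to Box2D; Pod is an invented type):

struct Pod { int x; float y; };

void demo() {
    Pod a;                // members are indeterminate: reading a.x here is undefined behaviour
    Pod b = Pod();        // value-initialized: b.x == 0, b.y == 0.0f
    Pod* c = new Pod();   // the trailing () value-initializes the members as well
    delete c;
}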
A quick way to find this kind of problem is to check your compiler warnings: see whether you are using a variable before it has been initialized (you may need to turn the relevant warnings on) or something similar.
Note: you can fix a lot of serious problems by making sure your code compiles with zero warnings, with the warning level set as high as is reasonable (usually a step above the default). A warning is often really a logical error in your code.
I am working on some C++ code and am having some problems with the function described below. I haven't used much C++ before, at least not for a long time, so I'm trying to learn as I go along to a large extent. The Win32 API doesn't help much with the confusion factor either...
The function is successfully called twice, then fails when it is called again at a later stage in the application.
PTSTR getDomainFromDN(PTSTR dnPtstr) {
size_t nDn=wcslen(dnPtstr);
size_t *pnNumCharConverted = new size_t;
wchar_t *szTemp = new wchar_t[10]; // for debugging purposes
_itow_s((int)nDn,szTemp,10,10); // for debugging purposes
AddToMessageLog(EVENTLOG_ERROR_TYPE,szTemp); // for debugging purposes (displays an integer value before failing)
AddToMessageLog(EVENTLOG_ERROR_TYPE,TEXT("Marker A")); // for debugging purposes
char *dn = new char[nDn];
// !!!!!!!!!!!! all goes wrong here, doesn't get to next line, nDn does have a value when it fails (61)
AddToMessageLog(EVENTLOG_ERROR_TYPE,TEXT("Marker B")); // for debugging purposes
wcstombs_s(pnNumCharConverted,dn,nDn+1,dnPtstr,nDn+1);
...more code here...
delete[] dn;
delete pnNumCharConverted;
return result;
}
At first I thought it was a memory allocation problem or something, as it fails on the line char *dn = new char[nDn];, the last marker showing as 'Marker A'. I used delete[] on the pointer further down, to no avail. I know that nDn has a value because I print it to a message log using _itow_s for debugging. I also know that dnPtstr is a PTSTR.
I tried using malloc as well with free() in the old C style but this doesn't improve things.
I tried sanitizing your code a bit. One of the big tricks to C++ is to not explicitly use memory management when it can be avoided. Use vectors instead of raw arrays. Strings instead of char pointers.
And don't unnecessarily allocate objects dynamically. Put them on the stack, where they're automatically freed.
And, as in every other language, initialize your variables.
PTSTR getDomainFromDN(PTSTR dnPtstr) {
    // needs <string>, <vector> and <sstream>
    std::wstring someUnknownString = dnPtstr;
    size_t numCharConverted = 0;
    std::wostringstream sstr;                                    // for debugging purposes
    sstr << someUnknownString.size();                            // log the length, as the original code did
    AddToMessageLog(EVENTLOG_ERROR_TYPE, sstr.str().c_str());    // for debugging purposes
    AddToMessageLog(EVENTLOG_ERROR_TYPE, TEXT("Marker A"));      // for debugging purposes
    std::vector<char> dn(someUnknownString.size() + 1);          // +1 leaves room for the terminating null
    AddToMessageLog(EVENTLOG_ERROR_TYPE, TEXT("Marker B"));      // for debugging purposes
    wcstombs_s(&numCharConverted, &dn[0], dn.size(),
               someUnknownString.c_str(), someUnknownString.size());
    ...more code here...
    return result;
}
This might not have solved your problem, but it has eliminated a large number of potential errors.
Given that I can't reproduce your problem from the code you've supplied, this is really the best I can do.
Now, if you could come up with sane names instead of dnPtstr and dn, it might actually be nearly readable. ;)
I think your problem is this line:
wcstombs_s(pnNumCharConverted,dn,nDn+1,dnPtstr,nDn+1);
because you are telling wcstombs_s to copy up to nDn+1 characters into dn, which is only nDn characters long.
try changing the line to:
wcstombs_s(pnNumCharConverted,dn,nDn,dnPtstr,nDn);
or perhaps better yet:
wcstombs_s(pnNumCharConverted,dn,nDn,dnPtstr,_TRUNCATE);
I'm not sure how you are debugging this or how AddToMessageLog is implemented, but if you are just inspecting the log to trace the code, and AddToMessageLog buffers your logging, then perhaps the error occurs before that buffer is flushed.
If you are sure that char *dn = new char[nDn]; is the line that fails, try set_new_handler -> http://msdn.microsoft.com/en-us/library/5fath9te(VS.80).aspx
On a side note, a few things:
The very first operation, size_t nDn = wcslen(dnPtstr);, is not 100% correct. You are calling wcslen on dnPtstr, assuming dnPtstr is Unicode. That is not guaranteed: PTSTR maps to either PWSTR or PSTR, depending on whether UNICODE is defined. So use _tcslen() instead (see the short sketch after these notes). It's worth spending some time on the UNICODE vs. non-UNICODE string types, since they matter a lot in Windows C++ development.
Why are you using so many news if these variables are only used inside this function (I am assuming they are)? Prefer the stack over the heap for local variables unless you have a definite requirement.
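A short sketch of the TCHAR point above (generic Windows code, not tied to this project):

#include <windows.h>
#include <tchar.h>

// With UNICODE/_UNICODE defined, PTSTR is wchar_t* and _tcslen expands to wcslen;
// without them, PTSTR is char* and _tcslen expands to strlen.
size_t lengthOf(PTSTR s)
{
    return _tcslen(s);   // correct in both build configurations
}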
I'm having a problem with dynamic_cast. I compiled my project and tested everything in debug mode, then I tried compiling it in release mode. I copied every configuration setting from debug mode except the optimization parameter, which is now /O2 (while debugging I had it set to /Od). The project compiles, but when it starts loading my resources I get an exception in this piece of code:
for (int j = 1; j < i->second->getParametersNumber(); j++)
{
    CCTMXTiledMap* temp = CCTMXTiledMap::tiledMapWithTMXFile(i->second->As<string>(j).c_str());
    CCTMXLayer* ret = NULL;
    for (NSMutableArray<CCNode*>::NSMutableArrayIterator l = temp->getChildren()->begin();
         !ret && l != temp->getChildren()->end(); l++)
        ret = dynamic_cast<CCTMXLayer*>(*l);
    t1.first = ret;
    templates[i->first].second.push_back(t1);
    templates[i->first].second.back().first->retain();
}
Nothing in the code changed, and when I check in the debugger every variable in the classes is what it should be, but dynamic_cast is throwing std::__non_rtti_object. What am I doing wrong? I'm using cocos2d-x (I didn't have enough reputation to add that tag).
Does CCNode have any virtual functions? Are all the elements of temp->getChildren() really CCNodes? Does temp->getChildren() return a reference? That last one is especially insidious: you call both temp->getChildren()->begin() and temp->getChildren()->end(), and if getChildren() returns a copy, you're taking the begin of one copy and the end of another copy.
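For illustration, a minimal sketch of that copy trap; the names are invented, not the cocos2d-x API:

#include <vector>

struct Node { virtual ~Node() {} };

struct Container {
    std::vector<Node*> children;
    // Returning by value: every call hands back a new temporary copy.
    std::vector<Node*> getChildrenByValue() const { return children; }
};

void broken(Container& c) {
    // begin() of one temporary is compared with end() of a different temporary,
    // so the iterators never belong to the same container and the loop is invalid.
    for (std::vector<Node*>::iterator it = c.getChildrenByValue().begin();
         it != c.getChildrenByValue().end(); ++it) {
        // ...
    }
}

void fixed(Container& c) {
    // Take one copy (or better, have the accessor return a const reference)
    // and iterate over that single container.
    std::vector<Node*> kids = c.getChildrenByValue();
    for (std::vector<Node*>::iterator it = kids.begin(); it != kids.end(); ++it) {
        // ...
    }
}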
In this case, after many code changes, I found out there had to be some bug that only shows itself when the code is optimized (I still don't know whether it's the compiler mis-optimizing or a problem in my own code, but it's probably mine). The main reason for the problem was *l being NULL.