I'm chasing a bug where a member value of an object seems to magically change, without any methods being called which modify it. No doubt something obvious but proving hard to track down.
I know I can put conditional breakpoints in methods based on the variable's value, but is it in any way possible to actually put a breakpoint on a variable itself? E.g. a breakpoint that fires when x == 4? I know I can put watches on; what about breakpoints?
Edit: this is a native-only project, no managed malarkey.
You can use a data breakpoint. There are a number of restrictions about how and when they can be used, namely that they work only in native code.
(To the best of my knowledge, you can only tell it to break when the variable changes, not when it changes to a specific value, but I'm not entirely sure; most of my code is mixed managed/native and thus can't use data breakpoints).
What you could do instead is wrap the variable in set/get functions; not just free template functions, but an actual separate class where set/get MUST be used for all access. Then put a breakpoint in the setter. Alternatively, for easier chopping and changing, you could wrap the value in a class and use operator overloads (with appropriate breakpoints in them) to alter it. That's probably the cleanest and most portable solution.
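For example, a minimal sketch of such a wrapper (Watched and its int member are illustrative names only; __debugbreak() is the MSVC intrinsic, which fits since this is a native project):

#include <intrin.h>

class Watched {
public:
    int get() const { return value_; }
    void set(int v) {
        // put a breakpoint here; or break only on a specific value:
        if (v == 4) __debugbreak();
        value_ = v;
    }
    Watched& operator=(int v) { set(v); return *this; }  // every write funnels through set()
private:
    int value_ = 0;
};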
What you may also find is that the variable is being modified in ways you don't expect. The best example I've got: I had an unsigned int that I subtracted from zero when I meant to increment from zero, so when I looked through the places I knew modified it, nothing flagged up, and I couldn't work out what was going on.
However, as far as I know, if a data breakpoint won't work for you, VC++ has no other mechanism to break on arbitrary changes, for example a change caused by stack or heap corruption. That said, if you're running a debug build, I'd expect VC++ to break on those anyway.
Let's say I have a very costly function that checks whether an object has a certain property. Another function would then, depending on whether the object has the property, do different things.
If I have previously checked for the property, would the second function recompute it, or is the result already known?
I'm thinking of something like:
bool check_property(const Object& object) {  // Object stands in for the actual type
    // very costly operations...
}

void do_something(const Object& object) {
    if (check_property(object)) { /* do thing */ }
    else { /* do different thing */ }
}
Would the if in do_something recompute check_property?
There are several factors that have to come together for the compiler to avoid recomputing the function's result:
The compiler has to know which input values the function's result depends on. This knowledge is very difficult to extract from the code in the general case. In some implementations you can help the compiler by using compiler-specific means to declare your function as "pure" or "const" (GCC function attributes; see the sketch below).
The compiler has to make sure that the above input values did not change since the previous call to the same function. This might be very easy in some specific cases, but is also very difficult in the general case.
The compiler has to have the result of the previous computation readily available. Normally, compilers do not deliberately "cache" such results in some dedicated storage for future reuse. The optimization in question is typically applied only when you make multiple calls to the same function in "close proximity" to each other, meaning that the previous result is easy to keep until the moment of the next call.
So, the optimization in question is certainly possible. But it is something you should expect to see only in simple and very localized cases, like calling sqrt(x) several times in a row for the same value of x (in the same expression, in the same loop, and such). For more complicated functions it is typically going to be your responsibility to either somehow avoid making multiple calls to the same expensive function, or to memoize the results if you believe that can benefit your code.
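For instance, a minimal sketch of the "pure"/"const" declaration from the first point, assuming GCC/Clang attribute syntax (costly() is a made-up declaration):

// Promise the compiler that the result depends only on the argument,
// so repeated calls with the same argument may be folded into one.
__attribute__((const)) double costly(double x);

double twice(double x)
{
    return costly(x) + costly(x);  // the compiler may now emit a single call
}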
Unless the compiler can prove that check_property has no side effects and that all the data it depends on is unchanged, it is not allowed to remove the call. For all practical purposes, unless your function's body is visible in the current TU, is pretty much trivial, and the multiple calls happen in the same function, calling it again will execute its code again. I don't know of any compiler that automatically establishes a cross-call cache, because that is not trivial at all.
If you need to cache the computed values, in general you will have to do it yourself, and keep in mind that it's not always trivial. The ugly beasts to tackle are generally cache invalidation (how do I know that the data used to calculate the value didn't change since the last time I calculated it? how do I keep the cache size from getting out of hand?) and multithreading concerns (is this code going to be called from multiple threads? if so, I have to synchronize access to the cache, possibly adding coupling between unrelated threads and, in extreme cases, killing the efficiency of the cache itself).
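For illustration, a hedged sketch of such a hand-rolled cache; Object, its id field, and the Key alias are all made up for the example, and note that this version never invalidates entries:

#include <map>
#include <mutex>

struct Object { int id; };           // stand-in for the real type
using Key = int;                      // hypothetical cheap identity for an object

bool check_property(const Object&);   // the costly function from the question

bool check_property_cached(const Object& object)
{
    static std::map<Key, bool> cache; // results keyed by object identity
    static std::mutex cache_mutex;    // serializes access across threads

    std::lock_guard<std::mutex> lock(cache_mutex);
    const Key key = object.id;
    auto it = cache.find(key);
    if (it != cache.end())
        return it->second;            // cache hit: skip the costly work
    const bool result = check_property(object);
    cache.emplace(key, result);
    return result;                    // caveat: no invalidation, unbounded growth
}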
To answer your question: yes, it will rerun it. If you want to make sure the code doesn't rerun it every time you call do_something, save the result in a variable that tells you whether you already ran it:
bool check_property(const Object& object) {  // Object stands in for the actual type
    // very costly operations...
    return true;
}

void do_something(const Object& object, bool has_run) {
    if (has_run) { /* do thing */ }
    else { /* do different thing */ }
}

int main() {
    Object object;
    bool has_run = check_property(object);
    do_something(object, has_run);
}
There are of course multiple ways of doing this, and this might not fit your criteria, but it is a possible way of doing it!
I just realized that this isn't really how C++ works, since, unlike Java, not everything is in classes. Instead you can just pass the value as an argument to the function itself, so I have edited my code.
I've been reading an excellent book written by Bjarne Stroustrup, and he recommends declaring variables as late as possible, preferably just before you use them; however, the book fails to mention any benefit of declaring variables late rather than at the start of the function body.
So what is the benefit of declaring variables late, like this:
#include <iostream>

int main()
{
    /* some
       code
       here
    */
    int MyVariable1;
    int MyVariable2;
    std::cin >> MyVariable1 >> MyVariable2;
    return 0;
}
instead of at the start of a function body like this:
#include <iostream>

int main()
{
    int MyVariable1;
    int MyVariable2;
    /* some
       code
       here
    */
    std::cin >> MyVariable1 >> MyVariable2;
    return 0;
}
It makes the code easier to follow. In general, you declare variables when you need them, e.g. near a loop when you want to find a minimum of something via that loop. In this way, when someone reads your code, (s)he doesn't have to try to decipher what 25 variables mean at the start of a function, but the variables will "explain" themselves when going through the code. After all, it's not important to know what variables mean, but to understand what the code does.
Remember that most of the time you use that local variable in a very small portion of your code, so it makes sense to define it in that small portion where you need it.
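For example (a small sketch; smallest() is made up and assumes a non-empty vector):

#include <vector>

int smallest(const std::vector<int>& values)  // assumes values is non-empty
{
    int min_so_far = values.front();  // declared right next to the loop that needs it
    for (int v : values)
        if (v < min_so_far)
            min_so_far = v;
    return min_so_far;
}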
A few points that come to mind:
Not all types are default-constructible, so declaring the object at the start of the function is often not an option; it can only be created at the point of initialization (e.g. auto myObj = creationalfunction();).
Your function gets a smaller number of lines, hence becomes more readable; declaring every variable at the beginning of the function makes it a few lines bigger, throughout the code.
If your function throws, it's not economical to construct a whole list of objects just to destroy them again during stack unwinding.
Declaring variables on the same line they are assigned lets you use auto, which makes the code considerably more flexible.
It's the common convention for C++ these days, and that is pretty important.
Creating an object and assigning to it later can be slower than directly initializing the object with values.
If "other code" is a page of code then you can't actually see the declaration on the screen when you read the values. If you thought that you were reading two doubles, you can't see on the screen that you are wrong. If you declare the variable on one line and use it on the next, any mistake would be obvious.
Suppose that you deal with some objects, and construction of these objects is an expensive operation. In such situations there are a few reasons why it is better to define variables just before their use:
1) First of all, it is sometimes faster to create an object using the appropriate constructor instead of default-constructing and then assigning. So this:
T obj(/* some arguments here */);
may be faster than this:
T obj;
/* some code here*/
obj = T(/* some arguments here */);
Note that in the first example only a single constructor is invoked, whereas in the second example a default constructor and an assignment operator are invoked.
2) If an exception is thrown somewhere between the object's definition and its first use, you do unnecessary work: the object is created and destroyed without ever being used. The same applies when the function returns between the object's definition and its first use.
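For example (T and may_throw() are placeholders):

struct T {};              // stand-in for an expensive-to-construct type
void may_throw();         // placeholder for code that might throw or return early

void example()
{
    T obj;                // expensive default construction happens here...
    may_throw();          // ...so if this throws, obj was built and destroyed unused...
    obj = T();            // ...because its first real use is only here
}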
3) Yes, readability is also worth mentioning here :)
When starting to get good at programming you will usually end up holding the entire program in your head at the same time. Later, you will learn how to reduce this to one function in your head.
Both of these limit how large or complex a program or function you can work with. You can help with this problem by simplifying what is going on so you no longer have to think about it: reduce your working memory needs. You can also trade one kind of complexity for another; fancy variable-value dancing for some complex higher-level algorithm, or for certainty of code correctness.
There are many ways to do this. You can work with chunkable patterns, and think in those patterns instead of in lower level primitives (this is basically what you did when you graduated from whole program state to single function state). You can also do this by making your state simpler.
Every variable carries state. It modifies what that line of code means, and what every previous line of code means up to the point of its declaration. A variable that exists on a line could be modified by the line or read by the line. To understand what the reading of a variable means, you have to audit every line between its declaration and its use for the possibility it is edited.
Now, this may not happen: but checking takes both time and working memory. If you have 10 variables, having to remember which of them were modified "above" and which were not, and what their values mean, can burn a lot of headspace.
On the other hand, a variable created, used, and either falling out of scope or never used again is not going to cause this cognitive load. You do not have to check for hidden state or meaning. What more, you are not tempted -- indeed not able -- to use it prior to that line. You are definitely not going to overwrite important state that later code relies on when you set it, and you are not going to have it modified to something surprising between initialization and use.
In short, reduce the "state space" of the lines of code where each variable is in scope, including the lines that don't use it at all.
Sometimes this is difficult to achieve, and sometimes impractical or impossible. But quite often it is easy, and it improves code quality and makes the code easier to read and understand. The most important audience of code is humans; there is a reason we don't check in the object-file output of a compiler (or some intermediate representation).
Suc "low state" code is also way easier to modify after the fact. In the limit, it becomes pure functional code.
I'm doing some performance tuning work, and I have found a potential bottleneck in our code base and am thinking about the best solution for it. I will try to keep this question as simple as possible. Basically, I have a method that works with a set of double values (std::set). The method signature looks something like:
void MyClass::CalculateStuff(const std::set<double> & mySet);
There are several places in the code that call this method. Only a few of these places need to work with this set while others don't care about the set. I suppose I could create another version of this method that includes this set and modify the existing one to use an empty set. However, this would create some overhead for the places that don't care about the set (because they would have to make additional method calls). So the other option I thought of was using a pointer to a set argument instead, like so:
void MyClass::CalculateStuff(const std::set<double> * pMySet);
The validity of the pointer would determine whether we want to use the set or not (i.e. passing a NULL pointer for the set argument means we do no work associated with the set). This would be faster but obviously not as clean from an interface perspective. I suppose I could heavily comment the code.
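In other words, something like this sketch (the body comments just mark where the hypothetical work would go, and the extra overload is one way to keep call sites clean):

#include <set>

class MyClass {
public:
    void CalculateStuff(const std::set<double>* pMySet);
    void CalculateStuff();                 // convenience overload, no set
};

void MyClass::CalculateStuff(const std::set<double>* pMySet)
{
    // ... work that does not involve the set ...
    if (pMySet != nullptr)
    {
        // ... additional work that reads *pMySet ...
    }
}

void MyClass::CalculateStuff()
{
    CalculateStuff(nullptr);               // callers that don't care about the set stay clean
}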
What do you think should be done? This is probably not a huge deal but it got me thinking about how far you should go to make your code faster (if performance is very important in an application) vs. making sure the code is still clean and manageable. Where should the line be drawn in this case?
Well, I do games programming, and for me, I only worry about potential bottlenecks when the game starts lagging; then I profile, etc.
Passing in by pointer will be more memory efficient, as a pointer to an object is only 4 bytes (8 on 64-bit platforms). But I do suggest that when you pass it null, you use nullptr, as that's the current standard way to denote a null pointer, rather than NULL or 0.
I'm guessing you are passing in a set to calculate something with the class's set, in which case maybe overloading operators would be the best option for you?
As per the title, I am planning to move some legacy code developed a decade+ ago for AIX. The problem is that the code base is huge. The developers didn't initialize their pointers in the original code, and now, while migrating the code to the latest servers, I see some problems with it.
I know that the best solution is to run through all the code and initialize all the variables wherever required. However, I am keen to know if there are any other solutions available to this problem. I tried Google but couldn't find an appropriate answer.
The most preventive long-term approach is to initialize all pointers at the location they're declared, changing the code to use appropriate smart pointers to manage the lifetime. If you have any sort of unit tests this refactoring can be relatively painless.
In a shorter term and if you're porting to Linux you could use valgrind and get a good shot at tracking down the one or two real issues that are biting you, giving you time to refactor at a more leisurely pace.
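As a sketch of that refactor (Widget is a made-up type):

#include <memory>

struct Widget { int value = 0; };   // stand-in for a real type

void before()
{
    Widget* w;                      // uninitialized: any use is undefined behavior
}

void after()
{
    auto w = std::make_unique<Widget>();  // initialized at its declaration, owned, freed automatically
    w->value = 42;
}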
Just initializing all the variables may not be a good idea.
Reliable behavior generally depends on variables having values known to be correct ("guaranteed by construction" to be correct). The problem with uninitialized variables isn't simply that they have unknown values. Obviously being unknown is a problem, but again, the desired state is having known and correct values. Initializing a variable to a known value that is not correct does not yield reliable behavior.
Not infrequently it happens that there is no 'default' value that is correct to use as a fallback if more complicated initialization fails. A program may choose not to initialize a variable with a value if that value must be over-written before the variable can be used.
Initializing a variable to a default value may cause a few problems in such cases. Often 'default' values are inoffensive, in that if they are used the consequences aren't immediately obvious. That's generally undesirable, because as the developer you want to notice when things go wrong. You can avoid this problem by picking default values that have obvious consequences, but that doesn't solve a second issue: static analyzers can often detect and report when an uninitialized variable is used. If there's a problem with some complicated initialization logic such that no value is set, you want that to be detectable. Setting a default value prevents static analysis from detecting such cases. So there are cases where you do not want to initialize variables.
With pointers the default value is typically nullptr, which to a certain extent avoids the first issue discussed above because dereferencing a null pointer typically produces an immediate crash (good for debugging). However code might also detect a null pointer and report an error (good for debugging) or might fall back to some other method (bad for debugging). You may be better off using static analysis to detect usages of uninitialized pointers rather than initializing them. Though static analysis may detect dereferencing of null pointers it won't detect when null pointers cause error reporting or fallback routines to be used.
In response to your comment:
The major problems that I see are:
Pointers to local variables are returned from functions.
Almost all the pointer variables are not initialized. I am sure that AIX does provide this comfort for the customer on the earlier platform; however, I really doubt the code would run flawlessly on Linux when it is put to the real test (production).
I cannot deliver partial solutions which may work; I prefer to give the best to my customer, who pays me for my work, so I won't use workarounds.
Quality cannot be compromised.
For the first problem: those are genuine bugs, so fix them (and pay special attention to correctly cleaning up).
For the second: as I argue above, simply lacking an initializer is not in and of itself a defect. There is only a defect if the uninitialized value is actually used in an illegal manner. I'm not sure what you mean about AIX providing comfort.
As for the last two points: as I argue above, the 'partial solution' and 'workaround' would be to blindly initialize everything.
Again, blindly initializing everything can result not only in useless work; it can actually compromise quality by taking away some of your tools for detecting bugs.
In C++, or any programming language, what is the point of declaring a variable const or constant? I understand what const does, but isn't it safer to declare everything non-constant, since the programmer knows whether or not to change the variable? I just don't see the objective of const.
If the programmer (or programming team) can successfully keep track of every detail of every variable and not accidentally assign a value to a constant, then by all means don't declare constants const.
The addition of const to the language is useful for preventing easily preventable errors, unlike languages of the dinosaur era where evil things would happen.
Here is an approximation of a bug I once had to track down in a huge FORTRAN 77 application. FORTRAN 77 passes parameters by reference unless extraordinary measures are taken:
      subroutine increment(i)
      integer i
      i = i + 1
      end

      subroutine process()
      call increment(1)
      call someprocedure(1, 2, 3)
      ...
The result was that someprocedure() was called with (2, 2, 3)! Because the literal 1 was passed to increment() by reference, incrementing the parameter modified the program's stored constant 1 itself, so later uses of that literal saw the new value.
but isn't it safer to declare everything not constant, because doesn't the programmer know whether or not to change the variable?
That is exactly wrong. It is "safer" to ensure that what is supposed to be a constant value is not changed by mistake. It conveys the intent of the value to all programmers who may stumble upon it in the future. If a program assumes that a value never changes, then why allow it to change? That can potentially cause very hard-to-track-down bugs.
Don't assume your program is correct, make it so (as much as is possible) using the utilities that your language provides. Most real-world projects are not completed by one guy sitting in his basement. They involve multiple programmers and will be maintained for many years, often by a group of people who had nothing to do with the initial version.
Please don't make me or anyone else guess as to what your design decisions were, make them explicit whenever possible. Hell, even you will forget what you were thinking when you come back to a program you haven't touched for some time.
because doesn't the programmer know whether or not to change the variable?
No. You will write a lot of code for other programmers. They may want to change that value.
Maybe you make a mistake and change the value unintentionally; if it was const, the compiler wouldn't have let you. const is also very useful when overloading operators.
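For instance (a minimal sketch; the name is made up):

const int max_connections = 10;

int main()
{
    // max_connections = 20;   // compile-time error: assignment of read-only variable
    return max_connections;
}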
Yes, you are right when you say that the basic idea is that constant variables prevent programming errors. But one additional use is that the interface you provide to your clients also ensures that whatever you want to be const remains constant: it prevents people who use your code from violating the constraint.
This comes in handy especially in OOP. By making sure your object is const, you can write a lot of code without worrying about the consequences; the object could be used by a new programmer or a client who is required to preserve that property. Const iterators are also very handy.
const also helps the compiler optimize code. Temporaries can be bound to const references, and doing so extends the lifetime of the temporary to the lifetime of the reference.
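A minimal sketch of that lifetime-extension rule (make_name() is made up):

#include <string>

std::string make_name() { return "temporary"; }

int main()
{
    const std::string& name = make_name();  // the temporary's lifetime is extended to match 'name'
    return name.empty() ? 1 : 0;            // safe: the temporary is still alive here
}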
In web programming, it can help ensure that a variable isn't changed (sometimes maliciously) via injection.
It also helps when you may accidentally change the variable but know ahead of time that that isn't desired.
Constants are useful when writing huge programs, where you may get lost and forget what a variable (or constant) does, or whether you are allowed to change its value or not.
Moreover, and more importantly, it's important in teamwork and in writing libraries for others. If you write code for others, they may change a variable you intend to keep at a constant value, so declaring it as constant is very useful.
It's also useful when you want to use the same number in places where only constants are allowed, for example when declaring static arrays. In that case you cannot set the size of the array with a (non-const) variable, and it's very annoying to review all the code and change the number every time you want to change the size of the array, so declaring that number as a constant is very useful; I have had real experience with this.
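For example (buffer_size is a made-up name):

const int buffer_size = 256;   // change this single line to resize every array below

int main()
{
    char buffer[buffer_size];  // OK: an array bound must be a compile-time constant
    buffer[0] = '\0';
    return 0;
}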
Hope that's convincing; that's what I remember for now, and I will come back and edit should I remember something else.
Happy programming :)