C++: Tracking primitive type value changes

I have a complex program with a weird bug: some int value drops to zero unexpectedly.
So I want to track this built-in type's value, so I can debug more easily.
To do that, I made the following ValueWatcher template class, which lets me track almost all changes to the value, except when the ValueWatcher is dereferenced. (I added the dereferencing operators because the program needs int * and int &.)
#include <iostream>
using std::cout;
using std::endl;

template <typename T>
class ValueWatcher
{
public:
    ValueWatcher(const T &val)
    {
        cout << "constructor with raw value " << val << endl;
        _cur = _old = val;
    }
    ValueWatcher(const ValueWatcher &vw)
    {
        cout << "constructor with ValueWatcher " << vw._cur << endl;
        _cur = vw._cur;
        _old = vw._old; // also copy _old so the new watcher starts consistent
    }
    ValueWatcher& operator=(const ValueWatcher &rhs)
    {
        cout << "operator= with ValueWatcher " << rhs._cur << endl;
        _cur = rhs._cur;
        onChanged();
        return *this;
    }
    ValueWatcher& operator=(const T &val)
    {
        cout << "operator= with " << val << endl;
        _cur = val;
        onChanged();
        return *this;
    }
    T *operator&()
    {
        cout << "addressing operator" << endl;
        // can't track anymore!!!
        return &_cur;
    }
    operator T&()
    {
        cout << "operator T&" << endl;
        // can't track anymore!!!
        return _cur;
    }
    // note: a const object must hand out a const T&; returning a non-const T&
    // here would not compile. Together with operator T() const below, some
    // conversions from a const ValueWatcher can be ambiguous.
    operator const T&() const
    {
        cout << "const operator T&" << endl;
        return _cur;
    }
    operator T() const
    {
        cout << "operator T" << endl;
        return _cur;
    }
private:
    void onChanged()
    {
        // compare _cur against _old here and do the proper action, then:
        _old = _cur;
    }
    T _cur;
    T _old;
};
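A minimal usage sketch (illustrative, not from the original program) showing where tracking is lost:

int main()
{
    ValueWatcher<int> w(5); // "constructor with raw value 5"
    w = 10;                 // "operator= with 10": tracked
    int *p = &w;            // "addressing operator": hands out a raw int*
    *p = 0;                 // this write bypasses the watcher entirely
    return 0;
}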
The problem is, when client code wants an int & or int * from a ValueWatcher, the class can hand one out, but a raw int * or int & cannot hold a ValueWatcher instance, so tracking is lost from then on.
Is there any way to solve this? I think it could be solved by returning a reference or pointer class instance instead of a raw & or * to the built-in type, but I don't know how to do that.
In addition: I can't run this program under a debugger. The problem occurs only in the REAL environment and is very hard to reproduce.

If you can reproduce the behavior when running in a debugger, you should be able to set a value change or memory change breakpoint. This is probably easier than introducing a proxy implementation.

It's probably not the best solution, but what if your * and & operators returned a pointer/reference to your ValueWatcher itself? Otherwise I would forbid the use of * and & (by not implementing them, or by making them private).
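Building on that, here is a minimal sketch (the WatcherRef name and design are my assumptions, not from the question) of a reference-like proxy that keeps writes observable; it reuses the ValueWatcher from the question:

template <typename T>
class WatcherRef
{
public:
    explicit WatcherRef(ValueWatcher<T> &w) : _w(w) {}
    WatcherRef& operator=(const T &val)
    {
        _w = val; // forwards to ValueWatcher::operator=, so the change is tracked
        return *this;
    }
    operator T() const { return _w; } // reads go through ValueWatcher's conversion
private:
    ValueWatcher<T> &_w;
};

This only helps if the client code can be changed to accept a WatcherRef<int> instead of a raw int&; once a real int& or int* escapes, tracking is lost, as the other answers point out.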

I don't think this is possible. Once you return an int* or int&, you've lost the ability to track anything. The only way I can think of to do it (and the correct way, IMO) is to use a debugger and set a watchpoint with an appropriate condition. When the condition is met, the debugger will interrupt and halt the program so you can inspect the memory, call stack, etc.

If you can spare PAGE_SIZE bytes for your variable, you can lock that part of memory using VirtualProtect (if you're on Windows): you can set read-only access, for example. After that, anything that tries to write to that variable will crash the program (so you'll be able to write a memory dump and pinpoint the routine that changes the variable). I used this technique to pinpoint a similar problem (multithreaded app, something was randomly overwriting memory blocks). If you can't debug the machine immediately, try writing dumps using MiniDumpWriteDump. You will be able to debug the memory dumps using WinDBG or Visual Studio.
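A minimal sketch of that technique (Win32 API; giving the variable its own page is assumed to be acceptable in your program):

#include <windows.h>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // Give the watched variable a whole page to itself.
    int *watched = static_cast<int*>(VirtualAlloc(
        nullptr, si.dwPageSize, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
    *watched = 42;

    // Lock the page: any subsequent write raises an access violation,
    // so the faulting instruction in the dump identifies the culprit.
    DWORD oldProtect;
    VirtualProtect(watched, si.dwPageSize, PAGE_READONLY, &oldProtect);

    // *watched = 0; // would crash right here
    return 0;
}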

If you're seriously desperate:
#define int ValueWatcher<int>
In a better scenario, you'd use
//typedef int intt;
typedef ValueWatcher<int> intt;
Then rewrite all the code that uses an int and replace the type. Replace int* with intt*. Replace int& with intt&.

You say you only see this problem when not debugging, so I'm guessing you have an obscure bug which can only be seen when building with optimizations. There are a couple of possible explanations for this behavior:
- You have a race condition somewhere.
- You didn't initialize a variable properly, so when building with optimizations your values are initialized differently than when debugging.
- You have a buffer overrun somewhere which is writing over one of your variables. Again, this could be something you only see when built with optimizations: when you build for debugging, the compiler leaves extra space around variables on the stack, which acts as a cushion and can keep some bugs from revealing themselves.
Here is a relevant SO post which explains these issues in more detail:
Program only crashes as release build -- how to debug?

Related

How to detect mid-function value changes to const parameter?

I ran into a nasty bug in some of my code. Here's the simplified version:
#include <iostream>
#include <string>

class A
{
public:
    std::string s;

    void run(const std::string& x)
    {
        // do some "read-only" stuff with "x"
        std::cout << "x = " << x << std::endl;

        // Since I passed x as a const reference, I expected the string never
        // to change, but it actually does get changed by the clear() function
        clear();

        // trying to do something else with "x", but now it has a different
        // value even though I declared it "const". This killed the code logic.
        std::cout << "x = " << x << std::endl;

        // is there some way to detect a possible change of x here at compile time?
    }

    void clear()
    {
        // in my actual code this doesn't even happen here, but three levels
        // deep in some other code that gets called
        s.clear();
    }
};

int main()
{
    A a;
    a.s = "test";
    a.run(a.s);
    return 0;
}
Basically, the code that calls a.run() used to be called with all kinds of strings in the past; at one point I needed the exact value that "a.s" held, so I just passed a.s in, and some time later I noticed the program behaving weirdly. I tracked it down to this.
Now, I understand why this is happening, but it looks like one of those really hard-to-trace bugs: you see the parameter declared as const & and suddenly its value changes.
Is there some way to detect this at compile time? I'm using Clang and MSVC.
Thanks.
Is there some way to detect this during compile-time?
I don't think so. There is nothing inherently wrong about modifying a member variable that is referred to by a const reference, so there is no reason for the compiler to warn about it. The compiler cannot read your mind to find out what your expectations are.
There are some usages where such a wrong assumption could result in definite bugs, such as undefined behaviour, which could be diagnosed if identified. I suspect that identifying such cases in general would be quite expensive computationally, so I wouldn't rely on it.
Redesigning the interface could make this situation impossible. For example, with the following:
struct wrapper {
    std::string str;
};
void run(const wrapper& x);
x.str will not alias the member s, because s is not stored inside a wrapper; the caller is forced to construct a wrapper, which copies the string.
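To illustrate (my own worked example, not from the answer), applying the redesign to the question's code forces a copy at the call site, and the aliasing disappears:

#include <iostream>
#include <string>

struct wrapper {
    std::string str;
};

class A
{
public:
    std::string s;

    void run(const wrapper& x)
    {
        std::cout << "x = " << x.str << std::endl;
        s.clear();                                 // no effect on x.str now
        std::cout << "x = " << x.str << std::endl; // still prints "test"
    }
};

int main()
{
    A a;
    a.s = "test";
    // a.run(a.s);        // no longer compiles: run() takes a wrapper
    a.run(wrapper{a.s});  // the temporary copies the string, so no aliasing
    return 0;
}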

C++: Ignoring multiple << operators in a line

Say I have the following code:
int ignored = 0;
StreamIgnore si;
si << "This part" << " should be" << ignored << std::endl;
I want si to simply ignore the rest of the stream when this code runs.
The thing is I want this to be as efficient as possible.
One obvious solution would be to have:
template <typename T>
StreamIgnore& operator<<(const T& val) {
    // Do nothing
    return *this;
}
BUT, if the code was something like:
StreamIgnore si;
si << "Fibonacci(100) = " << fib(100) << std::endl;
Then I'll have to calculate fib(100) before the //Do Nothing part.
So, I want to be able to ignore the rest completely without any unnecessary computations.
To make this request make sense, think of StreamIgnore as a StreamIgnoreOrNot class that decides whether to ignore the stream or not: it either yields something that uses the stream, or a StreamIgnore() instance that ignores the rest.
I thought about using macros somehow but couldn't come up with something that enables this syntax (i.e. "si << X << Y...").
I would appreciate it if someone could suggest a way to do that.
Thanks
I'd obviously use IOStreams, with disabling/enabling the output amounting to setting/clearing std::ios_base::failbit. Doing this will readily prevent formatting and writing the data. It won't prevent evaluation of arguments, though. For that purpose I'd use the logical AND operator:
si && si << not_evaluated() << when_not_used();
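A minimal sketch of that pattern (the MaybeStream name and fib helper are mine, not from the answer): since << binds tighter than &&, the whole chain is the right operand of && and is skipped entirely, arguments included, when the stream is disabled:

#include <iostream>

class MaybeStream {
public:
    explicit MaybeStream(bool enabled) : enabled_(enabled) {}
    explicit operator bool() const { return enabled_; }

    template <typename T>
    MaybeStream& operator<<(const T& v) {
        if (enabled_) std::cout << v;
        return *this;
    }

private:
    bool enabled_;
};

long fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

int main() {
    MaybeStream si(false);
    // fib(40) is never evaluated here: && short-circuits before the << chain runs.
    si && si << "Fibonacci(40) = " << fib(40) << "\n";
    return 0;
}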
There is no way to do this (without cheating, as you will see below). Your code is only passed the result of fib(100) and as such cannot short-circuit execution here at all.
However, there is a simple hack you can use:
#include <type_traits>

template<typename T> struct is_ignored { static const bool yes = false; };
template<> struct is_ignored<StreamIgnore> { static const bool yes = true; };

#define S(x) if(!is_ignored<std::remove_reference<decltype(x)>::type>::yes) x

S(si) << "meep\n";
You may have to add some _Pragmas to disable compiler warnings about the fact that this leads to dead code.

What could cause writing a pointer address to std::cout to crash?

Whenever I output a particular pointer address to std::cout, I get a crash:
bool MyClass::foo() const
{
    std::cout << "this prints fine" << std::endl << std::flush;
    std::cout << d << std::endl << std::flush; // crash!
    return true;
}
Where d is a pointer member of the class, i.e.:
class MyClass {
    // ...
private:
    MyClassPrivate* d;
};
What could cause the application to crash? Even if it is a NULL pointer or an uninitialized pointer, it should still print out the (perhaps invalid) address, right?
The application is compiled in debug mode, if that makes a difference. The function foo is not marked as inline.
Background: I am trying to track down a bug in an external application process. The bug only occurs when another application sends rapid-fire commands to the process. I'm using std::cout to trace the execution of the external process.
If this is not a valid pointer, any access to a member field might cause an access violation. Non-virtual methods called on invalid pointers work just fine until they try to access a field, because the call itself doesn't need to dereference this.
For instance, this situation would crash roughly as you describe:
MyClass* instance = nullptr; // or NULL if you're not using C++11
instance->foo(); // will crash when `foo` tries to access `this->d`
There could be an overload of operator<<(ostream &, MyClassPrivate*) that dereferences the pointer. For example, there certainly is one if MyClassPrivate is really char.
Try std::cout << (void*)d;, see whether or not it makes a difference. If not, zneak's answer seems plausible.
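A small demonstration of that suggestion (a standalone sketch, not from the original answer): with a char pointer, plain << selects the C-string overload and dereferences the pointer, while casting to void* prints only the address. If d were dangling or invalid, the first line could crash exactly as described:

#include <iostream>

int main()
{
    const char* d = "hello";   // stand-in for the suspicious member
    std::cout << d << std::endl;                            // C-string overload: dereferences d
    std::cout << static_cast<const void*>(d) << std::endl;  // prints only the address
    return 0;
}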

How to profile the memory consumption by a set of C++ classes?

I am trying to figure out the memory consumption of my (C++) program using gprof. The program does not have a GUI; it is entirely CLI based.
Now, I am new to gprof, so I read a few tutorials that taught me how to run gprof and spot time consumption.
However, I need to find out the memory consumption by a specific set of classes.
Say there is a program with many types, A, ..., Z. Now I want to run my program and see how much accumulated memory was used by objects of the classes A, E, I, O, U (for example).
Do you have any ideas or pointers on how I could approach this task?
I am not exclusively considering gprof; I am open to any (FOSS) software that gets the job done.
I have, of course, searched both Google and stackoverflow.com for an answer to this problem, but either I use the wrong keywords or nobody out there has had this problem.
Edit: Suggestions about doing this manually are obvious. Of course I could code it into the application, but it's about a great many classes that I would rather not change. Also, I want the total memory consumption, so I cannot just count all created objects; I would also have to track each object's size individually.
Edit2: I went with a modification of DeadMG's suggestion, which I only have to inherit from. It works pretty well, so if anybody has a similar problem, try this.
#include <iostream>
#include <map>
#include <string>
#include <typeinfo>

class GlobalObjectCounter {
public:
    struct ClassInfo {
        unsigned long created;
        unsigned long destroyed;
        unsigned short size;
        ClassInfo() : created(0), destroyed(0), size(0) {}
        ClassInfo(unsigned short _size) : created(0), destroyed(0), size(_size) {}
        void fmt(std::ostream& os) {
            os << "total: " << created << " obj = " << (created * size) << "B; ";
            os << "current: " << (created - destroyed) << " obj = "
               << ((created - destroyed) * size) << "B; ";
        }
    };
protected:
    static std::map<std::string, ClassInfo> classes;
    GlobalObjectCounter() {}
public:
    static void dump(std::ostream& os) {
        for (std::map<std::string, ClassInfo>::iterator i = classes.begin(); i != classes.end(); ++i) {
            os << i->first << ": ";
            i->second.fmt(os);
            os << "\n";
        }
    }
};

// the static member needs exactly one definition in a .cpp file:
// std::map<std::string, GlobalObjectCounter::ClassInfo> GlobalObjectCounter::classes;

template <class T> class ObjectCounter : public GlobalObjectCounter {
private:
    static ClassInfo& classInfo() {
        static ClassInfo& classInfo = classes[std::string("") + typeid(T).name()];
        classInfo.size = sizeof(T);
        return classInfo;
    }
public:
    ObjectCounter() {
        classInfo().created++;
    }
    ObjectCounter(ObjectCounter const& oc) {
        classInfo().created++;
    }
    ObjectCounter& operator=(const ObjectCounter&) { return *this; }
    ~ObjectCounter() {
        classInfo().destroyed++;
    }
};
The map lookup is a bit nasty, I admit, but I didn't have the nerve to store the iterator for each class. The main issue was that you would have to explicitly initialise it for each counted class. If you know how to do that better, tell me how.
I'm not aware of gprof even attempting to deal with questions of memory usage. The obvious alternative would be valgrind. If you only care about total memory usage, you could also do the job on your own: overload ::operator new and ::operator delete to track how much memory the program has requested. Of course, it's possible that you have some code that obtains memory by other means (e.g., directly calling something like sbrk), but that's fairly unusual. Neither approach attempts to track statically allocated memory and/or stack usage, though.
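A minimal sketch of the overload approach (the g_allocated counter is my own naming; note that exact tracking of freed bytes needs per-block bookkeeping, which this sketch omits):

#include <cstdio>
#include <cstdlib>
#include <new>

static std::size_t g_allocated = 0;  // total bytes ever requested through ::operator new

void* operator new(std::size_t n)
{
    g_allocated += n;
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    // The size is not available here; a real tracker would record it per block
    // (e.g., in a small header prepended to each allocation).
    std::free(p);
}

int main()
{
    double* d = new double;   // routed through the overload above
    delete d;
    std::printf("requested so far: %zu bytes\n", g_allocated);
    return 0;
}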
Trivial.
template<typename T> class Counter {
protected:
    static int count; // in-class initialization of a non-const static is illegal
    Counter() { count++; }
    Counter(const Counter&) { count++; }
    Counter& operator=(const Counter&) { return *this; }
    ~Counter() { count--; }
};
template<typename T> int Counter<T>::count = 0;

class A : Counter<A> {
public:
    static int GetConsumedBytes() {
        return sizeof(A) * count;
    }
};
If the use of A involves dynamic memory, then this solution can be improved on. You can also override the global operator new/delete.
GlibC provides statistics on heap memory allocation. Take a look at mallinfo. You could probably obtain statistics at various points during execution and get some kind of idea of how much memory is being used.
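A sketch of that idea (glibc-specific; newer glibc versions also offer mallinfo2, which has the same fields with wider types):

#include <malloc.h>
#include <cstdio>

// Sample the allocator's view of the heap at interesting points in the program.
static void report(const char* label)
{
    struct mallinfo mi = mallinfo();
    std::printf("%s: %d bytes in used blocks\n", label, mi.uordblks);
}

int main()
{
    report("startup");
    char* block = new char[1 << 20];
    report("after 1 MiB allocation");
    delete[] block;
    report("after free");
    return 0;
}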

Overloading << operator and recursion

I tried the following code:
#include <iostream>
using std::cout;
using std::ostream;

class X
{
public:
    friend ostream& operator<<(ostream &os, const X& obj)
    {
        cout << "hehe"; // comment this and the infinite loop is gone
        return (os << obj); // streams obj again, re-entering this same operator
    }
};

int main()
{
    X x;
    cout << x;
    return 0;
}
When I compile and run this, it's as expected: an infinite loop. If I remove the cout statement inside the friend function, the recursion doesn't happen. Why is that?
The optimizer decides all your remaining activity has no effect and optimizes it away.
Whether it's right or wrong to do so is a different matter.
In particular:
X x;
creates the empty object "x";
cout << x;
calls:
return (os << obj);
which appends the empty object; the compiler notices 'os' hasn't grown since the last call and shows no promise of doing so (and nothing else happens), so it decides the whole business is redundant and can be truncated at this point.
If you call
cout << "hehe"; // comment this and the infinite loop is gone
there is some extra activity, so the optimizer doesn't remove the following call.
I guess if you initialized x with anything non-empty, or performed any other non-trivial activity besides cout << "hehe";, you'd have the recursion running just the same.
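For reference (my own addition, not part of the original answers): the recursion disappears entirely once operator<< writes the object's state to os instead of streaming the object itself. A minimal sketch, with an example member added since the original X was empty:

#include <iostream>
using std::ostream;

class X
{
    int value; // example member; the original X had none
public:
    X() : value(0) {}
    friend ostream& operator<<(ostream &os, const X& obj)
    {
        return os << "X{" << obj.value << "}"; // streams a member, not obj: no recursion
    }
};

int main()
{
    X x;
    std::cout << x << std::endl;
    return 0;
}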
In both cases (with and without writing "hehe") Visual Studio 2005 gives the following warning:
warning C4717: 'operator<<' : recursive on all control paths, function will cause runtime stack overflow
In both cases it compiles and in both cases it gives a stack overflow.
However, without the "hehe" the stack overflow occurs a bit sooner.