Inside the following thread routine:
void* Nibbler::moveRoutine(void* attr)
{
    [...]
    Nibbler* game = static_cast<Nibbler*>(attr);
    while (game->_continue == true)
    {
        std::cout << game->_snake->_body.front()->getX() << std::endl; // displays 0
        std::cout << game->getDirection() << std::endl;                // displays 0
        game->moveSnake();
        std::cout << game->_snake->_body.front()->getX() << std::endl; // displays 0
        std::cout << game->getDirection() << std::endl;                // displays 42
    }
    [...]
}
I am calling the member function moveSnake(), which is supposed to modify the positions of the cells forming my snake's body.
void Nibbler::moveSnake()
{
    [...]
    std::cout << this->_snake->_body.front()->getX() << std::endl; // displays 0
    this->_snake->_body.front()->setX(3);
    this->_direction = 42;
    std::cout << this->_snake->_body.front()->getX() << std::endl; // displays 3
    [...]
}
Although both coordinates are effectively modified inside my moveSnake() function, the changes are gone when I get back to my routine, where the values revert to their initial state. I don't understand why this happens, since if I modify any other member of my class inside moveSnake() (the _direction member, for example), the instance keeps that value back in the routine.
The Nibbler class:
class Nibbler
{
public:
    [...]
    void moveSnake();
    static void* moveRoutine(void*);

private:
    [...]
    int _direction;
    Snake* _snake;
    IGraphLib* _lib;
    pthread_t _moveThread;
    ...
};
The snake:
class Snake
{
public:
    [...]
    std::vector<Cell*> _body;
};
And finally the cell:
class Cell
{
public:
    void setX(const int&);
    void setY(const int&);
    int getX() const;
    int getY() const;
    Cell(const int&, const int&);
    ~Cell();

private:
    int _x;
    int _y;
};
The cell.cpp code:
void Cell::setX(const int& x)
{
    this->_x = x;
}

void Cell::setY(const int& y)
{
    this->_y = y;
}

int Cell::getX() const
{
    return this->_x;
}

int Cell::getY() const
{
    return this->_y;
}

Cell::Cell(const int& x, const int& y)
{
    this->_x = x;
    this->_y = y;
}

Cell::~Cell()
{}
On its face, your question ("why does this member not get modified when it should?") seems reasonable. The design intent of what has been shown is clear enough and I think it matches what you have described. However, other elements of your program have conspired to make it not so.
One thing that may plague you is Undefined Behavior. Believe it or not, even the most experienced C++ developers run afoul of UB occasionally. Also, stack and heap corruption are extremely easy ways to cause terribly difficult-to-isolate problems. You have several things to turn to in order to root it out:
Debuggers (START HERE!)
With a simple single-step debugger, you can walk through your code and check your assumptions at every turn. Set a breakpoint, execute until it is hit, check the state of memory/variables, bisect the problem space again, iterate.
Static analysis
Starting with compiler warnings and moving up to lint and sophisticated commercial tools, static analysis can help point out "code smell" that may not necessarily be UB, but could be dead code or other places where your code likely doesn't do what you think it does.
Have you ignored the errors returned by the library/OS you're making calls into? In your case, it seems as if you're manipulating the memory directly, but this is a frequent source of mismatch between expectations and reality.
Do you have a rubber duck handy?
Dynamic analysis
Tools like Electric Fence, Purify, Valgrind (memcheck, helgrind), AddressSanitizer, ThreadSanitizer, and mudflap can help identify areas where you've written to memory outside of what's been allocated.
If you haven't used a debugger yet, that's your first step. If you've never used one before, now is the time when you must take a brief timeout and learn how. If you plan on making it beyond this level, you will be thankful that you did.
If you're developing on Windows, there's a good chance you're using Visual Studio. The debugger is likely well-integrated into your IDE. Fire it up!
If you are developing on Linux/BSD/OS X, you have access to either gdb or Xcode, both of which should be simple enough for this problem. Read a tutorial, watch a video, do whatever it takes and get that debugger rolling. You may quickly discover that your code has been modifying one instance of Snake and printing out the contents of another (or something similarly frustrating).
If you can't duplicate the problem condition when you use a debugger, CONGRATULATIONS! You have found a heisenbug. It likely indicates a race condition, and that information alone will help you home in on the source of the problem.
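To make that "two instances" pitfall concrete, here is a minimal, self-contained sketch (the Game struct and the accidental copy are assumptions for illustration, not your actual code) of how a thread can end up mutating one object while the main thread reads another:
#include <iostream>
#include <pthread.h>

struct Game {
    int x = 0;
};

// Thread routine: mutates whatever object it was handed.
void* routine(void* attr) {
    Game* g = static_cast<Game*>(attr);
    g->x = 3;
    return nullptr;
}

int main() {
    Game game;
    Game copy = game;  // accidental copy -- the classic pitfall
    pthread_t t;
    pthread_create(&t, nullptr, routine, &copy);  // the thread gets the copy...
    pthread_join(t, nullptr);
    std::cout << game.x << std::endl;  // ...so this still prints 0
    std::cout << copy.x << std::endl;  // while this prints 3
}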
Related
In my understanding it is not possible to store the address of a local variable in a global container, because the local variable will eventually be destroyed.
#include <iostream>
#include <string>
#include <vector>
#include <functional>

class AA {
    std::string name;
public:
    explicit AA(std::string n) : name(n) {}
    std::string getName() const {
        return name;
    }
};

class Y {
    std::vector<std::reference_wrapper<AA>> x;
public:
    void addAA(AA& aa) {
        x.emplace_back(aa);
    }
    AA& getLastA() {
        return x.back();
    }
};

void foobar(Y& y) {
    AA aa("aa");
    y.addAA(aa);
}

int main() {
    Y y;
    foobar(y);
    std::cout << y.getLastA().getName() << std::endl; // error - probably because the local aa has been destroyed
    return 0;
}
However, I do not understand why this code works:
void foobar(Y& y, AA& aa) {
    y.addAA(aa);
}

int main() {
    Y y;
    {
        AA aa("aa");
        foobar(y, aa);
    }
    std::cout << y.getLastA().getName() << std::endl;
    return 0;
}
aa is again created in another scope and should be destroyed. However, it is possible to get its name later in main. The code works fine.
Why don't we get an error in the second case?
In my understanding it is not possible to store the address of a local variable in a global container, because the local variable will eventually be destroyed.
You're starting from a flawed understanding. It's certainly a bad idea to keep the address of a local variable beyond its lifetime, but it's still possible. As you say, it's a bad idea because that pointer will quickly become invalid. Even so, you can take the address of any variable, and the language won't prevent you from storing it wherever you like. Like C before it, C++ lets you do a great many things that are bad ideas, so be careful.
Rust has a borrow checker that permeates the language and is its defining characteristic. In Rust, such code would be forbidden.
C++ is a language with a long history, and the reason there are no error diagnostics for this, I assume, is that when the language was created it was not a concern. In those days the C philosophy was mainstream: "the programmer knows everything". It would follow that the programmer knows they violated lifetime safety, so an error message would be redundant.
Nowadays, the C++ committee is preoccupied with keeping each version of C++ as backwards-compatible as possible, and making the provided snippet fail to compile could break existing code. Thus, mandatory diagnostics for lifetime violations are unlikely to enter the standard any time soon.
What the committee seems to lean towards on this kind of question is that if you want to enforce something that's not covered by the standard, you should use third-party static analysis tools. There are a few of them; the most notable is clang-tidy, but none of them, as far as I know, supports rigorous lifetime analysis that would detect the error demonstrated by your snippet.
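To make the lifetime point concrete, here is a minimal, self-contained sketch (assumed code, not from the question) where the dangling read compiles, may even appear to work, and is exactly the kind of error a dynamic tool can catch:
#include <iostream>
#include <functional>
#include <vector>

std::vector<std::reference_wrapper<int>> refs;

void capture() {
    int local = 42;
    refs.emplace_back(local);  // storing a reference to a local: the language allows it...
}                              // ...but 'local' dies here, so the stored reference dangles

int main() {
    capture();
    // Undefined behavior: this may print 42, print garbage, or crash.
    // Building with -fsanitize=address (with stack-use-after-return
    // detection enabled) will typically report the problem here.
    std::cout << refs.back().get() << std::endl;
    return 0;
}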
I ran into a nasty bug in some of my code. Here's the simplified version:
#include <iostream>
#include <string>

class A
{
public:
    std::string s;

    void run(const std::string& x)
    {
        // do some "read-only" stuff with "x"
        std::cout << "x = " << x << std::endl;

        // Since I passed x as a const reference, I expected the string never
        // to change, but it actually does get changed by the clear() function.
        clear();

        // Trying to do something else with "x", but now it has a different
        // value although I declared it "const". This killed the code logic.
        std::cout << "x = " << x << std::endl;

        // Is there some way to detect a possible change of x here at compile time?
    }

    void clear()
    {
        // in my actual code, this doesn't even happen here, but three levels
        // deep in some other code that gets called
        s.clear();
    }
};

int main()
{
    A a;
    a.s = "test";
    a.run(a.s);
    return 0;
}
Basically, the code that calls a.run() used to be called with all kinds of strings, and at one point I needed the exact value that a.s had, so I just passed a.s in there; some time later I noticed the program behaving weirdly and tracked it down to this.
Now, I understand why this is happening, but it looks like one of those really hard-to-trace bugs. You see the parameter declared as const& and suddenly its value changes.
Is there some way to detect this during compile time? I'm using Clang and MSVC.
Thanks.
Is there some way to detect this during compile-time?
I don't think so. There is nothing inherently wrong with modifying a member variable that is referred to by a const reference, so there is no reason for the compiler to warn about it. The compiler cannot read your mind to find out what your expectations are.
There are some usages where such a wrong assumption could result in definite bugs, such as undefined behaviour, which could be diagnosed if identified. I suspect that identifying such cases in general would be computationally quite expensive, so I wouldn't rely on it.
Redesigning the interface could make that situation impossible. For example, with the following:
struct wrapper {
    std::string str;
};

void run(const wrapper& x);
x.str cannot alias the member s, because s is not stored inside a wrapper; the caller has to copy it into one. A fuller sketch follows.
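Here is that redesign spelled out; the surrounding class is reconstructed from the question above, and the call in main is an assumption:
#include <iostream>
#include <string>

struct wrapper {
    std::string str;
};

class A
{
public:
    std::string s;

    void run(const wrapper& x)
    {
        std::cout << "x = " << x.str << std::endl;
        clear();                                    // clears the member s...
        std::cout << "x = " << x.str << std::endl;  // ...but x.str is unaffected
    }

    void clear()
    {
        s.clear();
    }
};

int main()
{
    A a;
    a.s = "test";
    a.run(wrapper{a.s});  // the wrapper necessarily holds a *copy* of a.s
    return 0;
}
Because the parameter type is wrapper rather than std::string, the caller cannot pass the member s itself; the copy into the wrapper is what breaks the aliasing.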
I have a class complicated which features various setters that modify some internal state. That internal state modification is potentially expensive, so I want to do it not too often. In particular, if several setters are invoked in immediate succession, I want to perform the expensive update of the internal state only once after the last of these setter invocations.
I have solved (or "solved"?) that requirement with a proxy. The following would be a minimal working code example:
#include <iostream>

class complicated
{
public:
    class proxy
    {
    public:
        proxy(complicated& tbu) : to_be_updated(&tbu) {}

        ~proxy() {
            if (nullptr != to_be_updated) {
                to_be_updated->update_internal_state();
            }
        }

        // If the user uses this operator, disable the update call in the destructor!
        complicated* operator->() {
            auto* ret = to_be_updated;
            to_be_updated = nullptr;
            return ret;
        }

    private:
        complicated* to_be_updated;
    };

public:
    proxy set_a(int value) {
        std::cout << "set_a" << std::endl;
        a = value;
        return proxy(*this);
    }

    proxy set_b(int value) {
        std::cout << "set_b" << std::endl;
        b = value;
        return proxy(*this);
    }

    proxy set_c(int value) {
        std::cout << "set_c" << std::endl;
        c = value;
        return proxy(*this);
    }

    void update_internal_state() {
        std::cout << "update" << std::endl;
        expensive_to_compute_internal_state = a + b + c;
    }

private:
    int a = 0;
    int b = 0;
    int c = 0;
    int expensive_to_compute_internal_state = 0;
};

int main()
{
    complicated x;
    x.set_a(1);
    std::cout << std::endl;
    x.set_a(1)->set_b(2);
    std::cout << std::endl;
    x.set_a(1)->set_b(2)->set_c(3);
}
It produces the following output, which looks like exactly what I wanted:
set_a
update
set_a
set_b
update
set_a
set_b
set_c
update
My questions are: Is my approach legit/best practice?
Is it okay to rely on temporary objects (i.e. the returned proxy objects) being destroyed at the semicolon, that is, at the end of the full expression?
I'm asking because I have a bad feeling about this for some reason. Maybe my bad feeling comes just from Visual Studio's warning which says:
Warning C26444 Avoid unnamed objects with custom construction and
destruction (es.84).
But maybe/hopefully my bad feelings are unjustified and that warning can just be ignored?
What bothers me the most: Is there any case in which the update_internal_state method will NOT be called (maybe by misusing my class or by some compiler optimization or whatever)?
Lastly: Is there any better approach to implement what I try to achieve with modern C++?
I think your solution is legitimate, but it has a drawback: it hides from the user of the code that the update is expensive. So one is more likely to write:
x.set_a(1);
x.set_b(2);
than
x.set_a(1)->set_b(2);
I would suggest making the setters private and adding a friend transaction class, so that modifying the object would look like this:
complicated x;
{
    transaction t(x);
    t.set_a(1);
    t.set_b(2);
    // an implicit commit may also be done in the destructor
    t.commit();
}
If a transaction is the only way to modify complicated, users will tend to call several setters in one transaction.
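Here is a minimal sketch of what such a transaction class could look like (all names and details are assumptions, not a definitive design):
#include <iostream>

class complicated
{
    friend class transaction;  // only transaction may touch the setters
public:
    void update_internal_state() {
        std::cout << "update" << std::endl;
        expensive = a + b + c;
    }
private:
    void set_a(int v) { std::cout << "set_a" << std::endl; a = v; }
    void set_b(int v) { std::cout << "set_b" << std::endl; b = v; }
    void set_c(int v) { std::cout << "set_c" << std::endl; c = v; }

    int a = 0, b = 0, c = 0;
    int expensive = 0;
};

class transaction
{
public:
    explicit transaction(complicated& c) : target(c) {}
    ~transaction() { if (!committed) commit(); }  // implicit commit
    transaction(const transaction&) = delete;
    transaction& operator=(const transaction&) = delete;

    void set_a(int v) { target.set_a(v); committed = false; }
    void set_b(int v) { target.set_b(v); committed = false; }
    void set_c(int v) { target.set_c(v); committed = false; }

    void commit() {
        target.update_internal_state();
        committed = true;
    }
private:
    complicated& target;
    bool committed = true;  // nothing to commit until a setter runs
};

int main()
{
    complicated x;
    {
        transaction t(x);
        t.set_a(1);
        t.set_b(2);
    }  // destructor commits exactly once
}
The destructor commits only if the user has not already done so explicitly, so forgetting commit() is harmless.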
The danger I see here is if your class has any methods that do not return a proxy (or any public data members). You disable the update call when the proxy's operator-> is used (which yields the complicated object), but this is only safe if whatever is called through that operator-> always returns another proxy that takes over the updating task. This seems like a huge pitfall for anybody who modifies the class later on.
I think it would be safer if complicated were to keep track of the number of alive proxy objects created on it, so that the last proxy to be destroyed performs the update call.
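A sketch of that refcounting variant, condensed to two setters (assumed code, not the question's):
#include <iostream>

class complicated
{
public:
    class proxy
    {
    public:
        explicit proxy(complicated& c) : target(&c) { ++target->live_proxies; }
        proxy(const proxy& o) : target(o.target) { ++target->live_proxies; }
        ~proxy() {
            if (--target->live_proxies == 0)
                target->update_internal_state();  // the last proxy updates
        }
        complicated* operator->() { return target; }
    private:
        complicated* target;
    };

    proxy set_a(int v) { std::cout << "set_a" << std::endl; a = v; return proxy(*this); }
    proxy set_b(int v) { std::cout << "set_b" << std::endl; b = v; return proxy(*this); }

    void update_internal_state() {
        std::cout << "update" << std::endl;
        state = a + b;
    }
private:
    int a = 0, b = 0, state = 0;
    int live_proxies = 0;
};

int main()
{
    complicated x;
    x.set_a(1)->set_b(2);  // both temporaries die at the semicolon; one update
}
Both proxy temporaries live until the end of the full expression; they are destroyed in reverse order of construction, so the counter only reaches zero once and the update runs exactly once.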
Following Dmitry Gordon's argument about people selecting the "wrong" approach, you can handle the matter a bit more simply (especially from the user's point of view):
class Demo
{
    int m_x;
    int m_y;                 // cached value, result of a complex calculation
    bool m_isDirtyY = true;

public:
    int x() { return m_x; }
    void x(int value) { m_x = value; m_isDirtyY = true; }

    int y()
    {
        if (m_isDirtyY)
        {
            // complex calculations...
            m_y = result;    // 'result' stands in for the computed value
            m_isDirtyY = false;
        }
        return m_y;
    }
};
This way, you'll only ever execute the calculations on demand, with no additional machinery like proxy objects or explicit transactions.
Depending on your needs, you might encapsulate this pattern in a separate (template?) class, perhaps receiving an updater object (a lambda?) or exposing a pure virtual update function, to avoid repeating code (see the sketch below).
Side note: setting a value might invalidate more than one cached value – no problem, just set more than one dirty flag to true then...
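Along the lines of that suggestion, here is one possible (assumed) encapsulation: a small cached<T> template that recomputes via a user-supplied lambda whenever it has been invalidated:
#include <functional>
#include <iostream>

// A cached value that recomputes itself through a user-supplied updater
// whenever it has been marked dirty.
template <typename T>
class cached
{
public:
    explicit cached(std::function<T()> compute) : m_compute(std::move(compute)) {}

    void invalidate() { m_dirty = true; }

    const T& get()
    {
        if (m_dirty)
        {
            m_value = m_compute();  // the complex calculation happens here
            m_dirty = false;
        }
        return m_value;
    }

private:
    std::function<T()> m_compute;
    T m_value{};
    bool m_dirty = true;
};

int main()
{
    int x = 2;
    cached<int> y([&x] { return x * x; });  // stand-in for the expensive computation
    std::cout << y.get() << std::endl;      // computes: prints 4
    x = 5;
    y.invalidate();
    std::cout << y.get() << std::endl;      // recomputes once: prints 25
    std::cout << y.get() << std::endl;      // cached: prints 25 without recomputing
}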
Suppose I am using a library which implements the function foo, and my code could look something like this:
#include <iostream>

void foo(const int&) { }

int main() {
    int x = 1;
    foo(x);
    std::cout << (1 / x) << std::endl;
}
Everything works fine. But now suppose at one point either foo gets modified or overloaded for some reason. Now what we get could be something like this:
#include <iostream>

void foo(int& x) {
    x--;
}

void foo(const int&) {}

int main() {
    int x = 1;
    foo(x);
    std::cout << (1 / x) << std::endl;
}
BAM. Suddenly the program breaks. What we actually wanted to pass in that snippet was a constant reference, but with the API change the compiler now selects the non-const overload (the better match for the non-const x), x is decremented to 0, and 1/x divides by zero.
What we wanted was actually this:
int main() {
    int x = 1;
    foo(static_cast<const int&>(x));
    std::cout << (1 / x) << std::endl;
}
With this fix, the program works again. However, I must say I've not seen many of these casts around in code; everybody seems simply to trust this type of error not to happen. In addition, this seems needlessly verbose, and if there's more than one parameter and names get longer, function calls become really messy.
Is this a reasonable concern, and how should I go about it?
If you change a function that takes a const reference so that the reference is no longer const, you are likely to break things, which means you have to inspect EVERY place where that function is called and ensure it is safe. Furthermore, having two functions with the same name, one taking a const reference and one not, in this sort of scenario is definitely a bad plan.
The correct thing to do is to create a new function that does the x-- variant, with a different name from the existing one.
Any API supplier that does something like this should be severely and physically punished, possibly with slightly less violence involved if there is a BIG notice in the documentation saying "We have changed function foo; it now decrements x unless the parameter is cast to const". It's one of the worst possible silent breaking changes one can imagine (in terms of "it'll be terribly hard to find out what went wrong").
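On the verbosity concern: since C++17 the standard library offers std::as_const (in <utility>), which expresses the same intent as the static_cast with less noise. A minimal sketch built on the snippets above:
#include <iostream>
#include <utility>  // std::as_const (C++17)

void foo(int& x) { x--; }
void foo(const int&) {}

int main() {
    int x = 1;
    foo(std::as_const(x));              // forces the const overload
    std::cout << (1 / x) << std::endl;  // x is still 1
}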
I have a complex program with a weird bug where some int value drops to zero unexpectedly.
I want to track this built-in value so I can debug the problem more easily.
To do that, I made the following ValueWatcher template class, which lets me observe almost all changes to the value, except when the ValueWatcher is dereferenced. (I implemented the address-of and conversion operators because the program needs int* and int&.)
#include <iostream>
using std::cout;
using std::endl;

template <typename T>
class ValueWatcher
{
public:
    ValueWatcher(const T& val)
    {
        cout << "constructor with raw value " << val << endl;
        _cur = _old = val;
    }

    ValueWatcher(const ValueWatcher& vw)
    {
        cout << "constructor with ValueWatcher " << vw._cur << endl;
        _cur = _old = vw._cur;
    }

    ValueWatcher& operator=(const ValueWatcher& rhs)
    {
        cout << "operator= with ValueWatcher " << rhs._cur << endl;
        _cur = rhs._cur;
        onChanged();
        return *this;
    }

    ValueWatcher& operator=(const T& val)
    {
        cout << "operator= with " << val << endl;
        _cur = val;
        onChanged();
        return *this;
    }

    T* operator&()
    {
        cout << "address-of operator" << endl;
        // can't track anymore!!!!!!!!!!!!!!!!!!!!!!!!!
        return &_cur;
    }

    operator T&()
    {
        cout << "operator T&" << endl;
        // can't track anymore!!!!!!!!!!!!!!!!!!!!!!!!!
        return _cur;
    }

    operator const T&() const
    {
        cout << "const operator T&" << endl;
        return _cur;
    }

    operator T() const
    {
        cout << "operator T" << endl;
        return _cur;
    }

private:
    void onChanged()
    {
        // update _old and take the proper action
    }

    T _cur;
    T _old;
};
The problem is that when client code wants an int& or int* from the ValueWatcher, it can hand out the int& or int*, but a plain int* or int& cannot hold a ValueWatcher instance, so from that point on changes can no longer be tracked.
Is there any way to solve this? I think it could be solved by returning an instance of a reference- or pointer-like class instead of a raw & or * to the built-in type, but I don't know how to do that.
In addition, I can't run this program under a debugger: the problem occurs only in the REAL environment and is very hard to reproduce.
If you can reproduce the behavior when running in a debugger, you should be able to set a value change or memory change breakpoint. This is probably easier than introducing a proxy implementation.
It's probably not the best solution, but what if your operator* or operator& returned a pointer/reference to your value watcher instead? Otherwise, I would forbid the use of * and & by not implementing them or by making them private.
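A rough sketch of that first idea (the WatcherPtr class and the stripped-down ValueWatcher below are assumptions for illustration):
#include <iostream>

// Minimal stand-in for the ValueWatcher from the question.
template <typename T>
class ValueWatcher
{
public:
    ValueWatcher(const T& v) : _cur(v) {}

    ValueWatcher& operator=(const T& v)
    {
        std::cout << "changed: " << _cur << " -> " << v << std::endl;
        _cur = v;
        return *this;
    }

    operator T() const { return _cur; }

private:
    T _cur;
};

// A pointer-like proxy: dereferencing yields the watcher itself, so
// assignments through the "pointer" are still routed through operator=.
template <typename T>
class WatcherPtr
{
public:
    explicit WatcherPtr(ValueWatcher<T>& w) : watcher(&w) {}

    ValueWatcher<T>& operator*() { return *watcher; }
    ValueWatcher<T>* operator->() { return watcher; }

private:
    ValueWatcher<T>* watcher;
};

int main()
{
    ValueWatcher<int> v(1);
    WatcherPtr<int> p(v);
    *p = 5;                             // tracked: goes through operator=
    std::cout << int(*p) << std::endl;  // reads via the conversion: prints 5
}
The catch remains: code that insists on a genuine int* still cannot accept a WatcherPtr, which is why the neighbouring answers recommend debugger watchpoints or memory protection instead.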
I don't think this is possible. Once you return an int* or int&, you've lost the ability to track anything. The only way (and the correct way, IMO) to do it that I can think of is to use a debugger and set a watch point with an appropriate condition. When the condition is met the debugger will interrupt and halt the program so you can inspect the memory, call stack, etc.
If you can spare PAGE_SIZE bytes for your variable, you can lock that part of memory using VirtualProtect (if you're on Windows) - you can set read-only access, for example. After that, anything that tries to write to the variable will crash the program (so you'll be able to write a memory dump and pinpoint the routine that changes the variable). I used this technique to pinpoint a similar problem (a multithreaded app where something was randomly overwriting memory blocks). If you can't debug the machine immediately, try writing dumps using MiniDumpWriteDump. You will be able to debug the memory dumps using WinDBG or Visual Studio.
If you're seriously desperate:
#define int ValueWatcher<int>
In a better scenario, you'd use
//typedef int intt;
typedef ValueWatcher<int> intt;
Then re-write all your code that wants an int and replace it. Replace int* with intt*. Replace int& with intt&.
You say you only see this problem when not debugging, so I'm guessing you have an obscure bug that can only be seen when building with optimizations. There are a few possible explanations for this behavior:
You have a race condition somewhere.
You didn't initialize a variable properly, so when building with optimizations your values are initialized differently than when debugging.
You have a buffer overrun somewhere which is writing over one of your variables (see the sketch after this list). Again, this could be something you only see when built with optimizations: when you build for debugging, the compiler leaves extra space around variables on the stack, which acts as a cushion and can keep some bugs from revealing themselves.
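For the buffer-overrun case, a minimal (assumed) sketch of how a neighbouring variable can be silently clobbered:
#include <cstring>
#include <iostream>

int main()
{
    char buf[4];
    int sentinel = 7;
    std::strcpy(buf, "overflow!");       // 10 bytes into a 4-byte buffer: UB
    std::cout << sentinel << std::endl;  // may print 7, print garbage, or crash;
                                         // debug and release builds may differ
}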
Here is a relevant SO post which explains these issues in more detail:
Program only crashes as release build -- how to debug?