I'm trying to write my first simple game using C++ and Allegro 4.2.3, but I'm getting intermittent crashes when I change the game state. When a state starts, it is created with the 'new' operator, and 'delete' is used before switching to the next one. I'm not sure I fully understand the new and delete operators, though! Here's a selection of the code:
enum //Game state
{
    TITLE_SCREEN, CUTSCENE_1
};

int main(void)
{
    const int screenwidth = 800;
    const int screenheight = 600;
    bool quit = false;
    int state = TITLE_SCREEN;
    int oldstate = -1;
    int oldoldstate;

    init(screenwidth, screenheight);
    install_int(counter, (1000/FPS));

    TitleState *Title;
    Cutscene *Scene1;

    srand (time(NULL));

    while (!quit)
    {
        while(tick == 0)
        {
            rest(1);
        }
        while(tick > 0)
        {
            oldoldstate = oldstate;
            oldstate = state;
            switch(state)
            {
                case TITLE_SCREEN:
                    //If the last state is different to this one, create the state
                    if(oldoldstate != TITLE_SCREEN)
                    {
                        Title = new TitleState();
                    }
                    //Run the program in the state
                    Title->Play();
                    //Check the state to see if it has changed
                    state = Title->CheckState();
                    //If the state has changed, delete the current state
                    if(oldstate != state)
                    {
                        delete Title;
                    }
                    break;
                case CUTSCENE_1:
                    if(oldoldstate != CUTSCENE_1)
                    {
                        Scene1 = new Cutscene(); //SOMETIMES CRASHES BEFORE HERE
                    }
                    Scene1->Play();
                    state = Scene1->CheckState();
                    if(oldstate != state)
                    {
                        delete Scene1;
                    }
                    break;
                case EXIT:
                    quit = true;
                    break;
                default:
                    allegro_message("Game state not found!");
                    exit(-1);
            }
            int oldtick = tick;
            tick--;
            if(oldtick <= tick)
                break;
        }
    }
    deinit();
    return 0;
}
When the program crashes, VS2010 opens up thread.c to show where the error was:
static void _callthreadstart(void)
{
    _ptiddata ptd;           /* pointer to thread's _tiddata struct */

    /* must always exist at this point */
    ptd = _getptd();

    /*
     * Guard call to user code with a _try - _except statement to
     * implement runtime errors and signal support
     */
    __try
    {
        ( (void(__CLRCALL_OR_CDECL *)(void *))(((_ptiddata)ptd)->_initaddr) )
            ( ((_ptiddata)ptd)->_initarg ); //ERROR HERE (Next statement to be executed)

        _endthread();
    }
    __except ( _XcptFilter(GetExceptionCode(), GetExceptionInformation()) )
    {
        /*
         * Should never reach here
         */
        _exit( GetExceptionCode() );
    } /* end of _try - _except */
}
I would greatly appreciate any help, because I'm not sure at all what the problem is.
Thanks for the help and suggestions, they are interesting and useful to know! However, I think the solution to my problem was to use a newer version of the Allegro library - I was told to use 4.2.3 for this but moving to 4.4.2 has removed this problem as far as I can tell!
new allocates memory from the heap, and delete releases it.
You cannot delete (release) the same memory twice. It's worth going through a tutorial on new/delete.
I have not gone through your code in detail, but here are a few things you can do:
1) Run your code under gdb to get a stack trace of the crash; that alone will tell you most of what you need to know.
2) If it is caused by new/delete, it may well be a double free, although there are other reasons new/delete can fail.
3) Keep a count of how many times new and delete get called: you can override new/delete and write your own routines (a sketch is shown below).
4) If you cannot do any of the above, simply add printf statements to narrow down where it fails. printf output is not always flushed in time, so call fflush() as well.
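For point 3, here is a minimal sketch (assuming C++11 for noexcept; this is illustrative, not your code) that globally replaces operator new and operator delete to count calls. A mismatch between the two counters at exit hints at a double delete or a leak:

// Minimal sketch: count every global new/delete. Not thread safe, and the
// array forms (new[]/delete[]) are not replaced here; it is only a debugging aid.
#include <cstdio>
#include <cstdlib>
#include <new>

static long g_news = 0;
static long g_deletes = 0;

void* operator new(std::size_t size)
{
    ++g_news;
    if (void* p = std::malloc(size))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p)
    {
        ++g_deletes;
        std::free(p);
    }
}

// Call this just before returning from main() and compare the counters.
void report_allocations()
{
    std::printf("new: %ld  delete: %ld\n", g_news, g_deletes);
    std::fflush(stdout);
}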
This is a tiny fraction of the program's code, so it's hard to tell what could be wrong.
The reason is that C++ has the concept of "undefined behavior": when you do something wrong (for example, deleting the same object twice), you cannot rely on an error being raised immediately. Instead, something else, even something unrelated, may behave strangely later, possibly millions of instructions later.
One of the basic assumptions of the language is that programmers don't make this kind of error (go figure!).
It's perfectly possible that you made a mistake in another part of the code and only see the error at that delete call; the call is just the "victim" of what happened earlier, not the point where the problem is.
There are some general rules that help prevent allocation/deallocation errors, like using standard containers instead of naked pointers and following the rule of the "big three"; a sketch of the smart-pointer approach is shown below.
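As an illustration of that advice, here is a minimal sketch (assuming C++11; the state classes and the factory are hypothetical, not the poster's actual ones) where a std::unique_ptr owns the current state, so the old state is destroyed exactly once, and automatically, when a new one is assigned:

// Sketch only: a smart pointer owns the current state, so there is no manual
// delete and no chance of deleting twice or using an uninitialized pointer.
#include <cstdio>
#include <memory>

struct GameState {
    virtual ~GameState() {}
    virtual int Play() = 0;              // returns the id of the next state
};

struct Title : GameState {
    int Play() override { std::puts("title"); return 1; }     // go to state 1
};
struct Cutscene : GameState {
    int Play() override { std::puts("cutscene"); return -1; } // -1 means quit
};

// Hypothetical factory, for illustration only.
static std::unique_ptr<GameState> makeState(int id)
{
    if (id == 0) return std::unique_ptr<GameState>(new Title);
    return std::unique_ptr<GameState>(new Cutscene);
}

int main()
{
    int state = 0;
    std::unique_ptr<GameState> current = makeState(state);
    while (true)
    {
        int next = current->Play();
        if (next < 0) break;             // quit
        if (next != state)
        {
            state = next;
            current = makeState(state);  // old state deleted here, exactly once
        }
    }
}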
Every time you write an instruction in a C++ program you should think carefully, because if you let a logical bug slip in, it will be very hard to remove later. This is less of an issue in languages where "undefined behavior" is not present at the language level.
I'm doing some OpenCL programming, and at one location in my code I get a strange error.
a and a_end are pointers to local memory
if (a+POS<=a_end) {
    max = ....
} else {
    max = *(a_end-1);
}
In my case "else" isn't reached in the current loop. However, the application crashes with -5 CL_OUT_OF_RESOURCES if the line is part of the code.
If I comment the line the program works well. This is very strange.
Do you have any suggestions?
Regards,
Chris
Edit: Some more code
Values of a, a_end and POS1 before it crashes:
a: 3298304
a_end: 3311264
POS1: 34
border=b-b_end; //TODO: Check if all dummy elements are removed in this case

if(POS1<border && a+POS1<a_end) {
    s_data[POS1+s_maxes[2]-border+1]=a[POS1];
    s_ids[POS1+s_maxes[2]-border+1] = a_pos+POS1;
}
if(POS1+1==border) {
    debug[0] = a+POS1;
    debug[1] = a_end;
    s_maxes[1]=*(b_end-1);
    if(a+POS1<=a_end) {
        s_maxes[0]=s_data[s_maxes[2]];
    } else {
        s_maxes[0]=*(a_end-1); //Here is the line where it crashes
    }
}
if(POS2<border && a+POS2<a_end) {
    s_data[POS2+s_maxes[2]-border+1]=a[POS2];
    a_pos+POS2;
}
if(POS2+1==border) {
    s_maxes[1]=*(b_end-1);
    if(a+POS2<=a_end) {
        s_maxes[0]=s_data[s_maxes[2]];
    } else {
        s_maxes[0]=*(a_end-1);
    }
}
a+=border;
a_pos+=border;
There is a good chance the following scenario happens: before your if, the value of a_end is corrupted. Quite possibly it ends up as 0 (without more of the code this is my best guess, but it might also just be some value smaller than a + POS). The else branch then gets executed and tries to dereference the value at address 0 - 1, which wraps around to a very large address, and the application crashes. Obviously, if you remove the else branch, this code is never executed.
Hint: put some printouts in for the value of a_end.
I looked at a few examples and as far as I can tell I'm using the splice method correctly. However, when I run the program in debug mode under Visual C++, the call stack shows my call to splice eventually reaches this line where it crashes:
_Mynextiter = _Parent_proxy->_Myfirstiter;
The code:
for(std::list<Rect>::iterator i = rects.begin(); i != rects.end();)
{
    if(i->isOverlapping(newRect))
    {
        Rect oldRect = (*i);
        i = rects.erase(i);
        std::list<Rect> & lr = oldRect.split(newRect);
        //doesn't work either
        //rects.splice(rects.begin(), lr, lr.begin(), lr.end());
        rects.splice(rects.begin(), lr);
    }
    else i++;
}
It seems you are using std::list<T>::splice() the right way, as far as the manual pages I can find describe it.
On the other hand, I don't really like the reference in this line:
std::list<Rect> & lr = oldRect.split(newRect);
I don't know what Rect::split() is doing, but you should try copying its result instead of binding a reference to it. Does that help?
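For what it's worth, here is a minimal, self-contained sketch of that suggestion (assuming C++11; Rect and split() here are stand-ins for the question's types, not the real ones). The result is taken by value, so the spliced-from list is a real local object rather than a reference to something that may no longer exist:

// Sketch only: take split()'s result by value, then splice from the local list.
#include <list>

struct Rect { int w, h; };

// Hypothetical split() that returns its pieces by value.
std::list<Rect> split(const Rect& r)
{
    std::list<Rect> parts;
    parts.push_back(Rect{r.w / 2, r.h});
    parts.push_back(Rect{r.w - r.w / 2, r.h});
    return parts;
}

int main()
{
    std::list<Rect> rects;
    rects.push_back(Rect{10, 4});

    std::list<Rect> lr = split(rects.front());  // a copy, not a dangling reference
    rects.splice(rects.begin(), lr);            // lr is left empty; rects owns the nodes
}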
Valgrind detects an invalid read error that I don't know how to fix, or, to be more precise, I don't know what the problem actually is.
Invalid read of size 8
at 0x443212: std::vector<Tile*, std::allocator<Tile*> >::end() const
by 0x44296C: Collection<Tile*>::Iterator::operator++()
The Iterator class is very simple (and honestly a somewhat bad piece of programming), but it is sufficient for my needs right now. There are three methods you should know about to help find the problem:
Iterator(size_t x, size_t y, const TileCollection& tiles)
    : mTiles(&tiles)
    , mX(mTiles->begin())
    , mY(mTiles->at(x).begin())
{
    std::advance(mX, x);
    std::advance(mY, y);

    bool foundFirst = false;
    while (!foundFirst)
    {
        while (mY != mX->end() && *mY == 0) ++mY;
        if (mY != mX->end()) foundFirst = true;
        else
        {
            ++mX;
            if (mX != mTiles->end()) mY = mX->begin();
        }
    }
}

Iterator Iterator::operator++()
{
    bool foundNext = false;
    ++mY;
    while (!foundNext)
    {
        while (mY != mX->end() && *mY == 0) ++mY;
        if (mY != mX->end()) foundNext = true;
        else
        {
            ++mX;
            if (mX != mTiles->end()) mY = mX->begin();
        }
    }
    return *this;
}

void TileCollection::add(Tile* tile)
{
    Point2D p(tile->getPosition());
    std::vector<Tile*> tmp(1, (Tile*)0);
    if ((size_t)p.x >= mTiles.size())
        mTiles.resize(p.x + 1, tmp);
    if ((size_t)p.y >= mTiles.at(p.x).size())
        mTiles.at(p.x).resize(p.y + 1, (Tile*)0);
    mTiles.at(p.x).at(p.y) = tile;
    ++mNumTiles;
}
The actual code that is causing the valgrind error is the line:
while (mY != mX->end() && *mY == 0) ++mY;
...of the Iterator::operator++ method.
It looks to me as though, at the very least, the following line in operator++
if (mX != mTiles->end()) mY = mX->begin();
is lacking a suitable else-clause.
Consider what happens when mX actually reaches mTiles->end(): You will enter a new iteration of the outer while loop; the first line in that loop (the line that causes the Valgrind error) will evaluate mX->end() and thus attempt to dereference mX -- but mX is mTiles->end(), and it's not correct to dereference the end iterator of a collection since it doesn't actually reference an element of the collection. It looks to me as if this may be the cause of your Valgrind error.
(Note that the constructor contains essentially the same code.)
More generally, I think you need to think about how you handle reaching the end of your two-dimensional array. How does the client of your Iterator check whether it has reached the end of the iteration? How do you expect your operator++ to handle the case when it reaches the end of the two-dimensional array? Should it protect itself against getting called too often?
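One possible shape for the fix, offered only as a hedged sketch (it reuses the question's members and assumes the convention that mX == mTiles->end() means "past the end", which is something you would have to settle on), is to stop touching mX->end() once mX has reached mTiles->end():

// Sketch only: never dereference mX after it has reached mTiles->end().
Iterator& Iterator::operator++()   // returning a reference is the conventional pre-increment signature
{
    ++mY;
    while (mX != mTiles->end())
    {
        while (mY != mX->end() && *mY == 0) ++mY;    // skip null tiles in this column
        if (mY != mX->end()) break;                  // found the next tile
        ++mX;                                        // otherwise move to the next column...
        if (mX != mTiles->end()) mY = mX->begin();   // ...and restart there, if it exists
    }
    return *this;                                    // mX == mTiles->end() signals "past the end" (assumed convention)
}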
You can try to split up the statement in order to find out exactly where the error occurs:
while (mY != mX->end()) // maybe here
{
    if (*mY != 0) // maybe here
    {
        break;
    }
    ++mY; // maybe here
}
Compiling with the GCC option -fno-inline helps to get a nicer stack trace, which can help you trace the error. It will also make your program very slow, so don't forget to remove it later.
A while back I switched the way I handle C-style errors.
I found a lot of my code looked like this:
int errorCode = 0;
errorCode = doSomething();
if (errorCode == 0)
{
    errorCode = doSomethingElse();
}
...
if (errorCode == 0)
{
    errorCode = doSomethingElseNew();
}
But recently I've been writing it like this:
int errorCode = 0;
do
{
    if (doSomething() != 0) break;
    if (doSomethingElse() != 0) break;
    ...
    if (doSomethingElseNew() != 0) break;
} while(false);
I've seen a lot of code where nothing gets executed after there's an error, but it has always been written in the first style. Is there anyone else who uses this style, and if you don't, why?
Edit: just to clarify, this construct usually relies on errno; otherwise I assign the value to an int before breaking. Also, there's usually more code than a single function call inside the if (error == 0) clauses. Lots of good points to think on, though.
If you're using C++, just use exceptions. If you're using C, the first style works great. But if you really do want the second style, just use gotos - this is exactly the type of situation where gotos really are the clearest construct.
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandler;
return;
errorHandler:
// handle error
Yes, gotos can be bad, and exceptions or explicit error handling after each call may be better, but gotos are much better than co-opting another construct to simulate them poorly. Using gotos also makes it trivial to add another error handler for a specific error:
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandlerSomethingElseNew;
return;
errorHandler:
// handle error
return;
errorHandlerSomethingElseNew:
// handle error
return;
Or if the error handling is more of the "unrolling/cleaning up what you've done" variety, you can structure it like this:
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler1;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandler2;
errorHandler2:
// clean up after doSomethingElseNew
errorHandler1:
// clean up after doSomethingElse
errorHandler:
// clean up after doSomething
return errorCode;
This idiom gives you the advantage of not repeating your cleanup code. (Of course, if you're using C++, RAII will cover the cleanup code even more cleanly.)
The second snippet just looks wrong. You've effectively reinvented goto.
Anyone reading the first style will immediately know what's happening; the second style requires more examination and thus makes maintenance harder in the long run, for no real benefit.
Edit: in the second style, you've also thrown away the error code, so you can't take any corrective action, display an informative message, log something useful, etc.
The first style is a pattern the experienced eye groks at once.
The second requires more thought - you look at it and see a loop. You expect several iterations, but as you read through it, this mental model gets shattered...
Sure, it may work, but programming languages aren't just a way to tell a computer what to do; they are a way to communicate those ideas to other humans too.
I think the first one gives you more control over what to do with a particular error. The second way only tells you that an error occurred, not where or what it was.
Of course, exceptions are superior to both...
Make it short, compact, and easy to quickly read?
How about:
if ((errorcode = doSomething()) == 0
    && (errorcode = doSomethingElse()) == 0
    && (errorcode = doSomethingElseNew()) == 0)
    maybe_something_here;
return errorcode; // or whatever is next
Why not replace the do/while and break with a function and returns instead?
You have reinvented goto.
What about using exceptions?
try {
    DoSomeThing();
    DoSomethingElse();
    DoSomethingNew();
    .
    .
    .
}
catch(DoSomethingException e) {
    .
    .
}
catch(DoSomethingElseException e) {
    .
    .
}
catch(DoSomethingNewException e) {
    .
    .
}
catch(...) {
    .
    .
}
Your method isn't really bad and it's not unreadable like people here are claiming, but it is unconventional and will annoy some (as you noticed here).
The first one can get REALLY annoying after your code gets to a certain size because it has a lot of boilerplate.
The pattern I tended to use when I couldn't use exceptions was more like:
void fn() {
    while(true) {
        if(doIt())
            handleError(); //error detected...
    }
}

bool doIt() {
    if(!doThing1Succeeds())
        return true;
    if(!doThing2Succeeds())
        return true;
    return false;
}
Your second function should get inlined into the first if you put the correct magic incantations in its signature (i.e. mark it inline), and each function becomes more readable.
This is functionally identical to the while/bail loop without the unconventional syntax, and also a bit easier to understand, because you separate the concern of looping/error handling from the concern of "what does your program do in a given loop?".
This should be done through exceptions, at least if the C++ tag is correct. There is nothing wrong with it if you are using plain C, although I'd suggest using a Boolean instead, since you are not using the returned error code; then you don't have to type != 0 either.
I've used the technique in a few places (so you aren't the only one who does it). However, I don't do it as a general rule, and I have mixed feelings about it where I have used it. Used with careful documentation (comments) in a few places, I'm OK with it. Used everywhere - no, generally not a good idea.
Relevant exhibits: the files sqlstmt.ec, upload.ec, and reload.ec from the SQLCMD source code (no, not Microsoft's impostor; mine). The '.ec' extension means the file contains ESQL/C - Embedded SQL in C, which is pre-processed to plain C; you don't need to know ESQL/C to see the loop structures. The loops are all labelled with:
/* This is a one-cycle loop that simplifies error handling */
The classic C idiom is:
if( (error_val = doSomething()) == 0)
{
    //Manage error condition
}
Note that in C an assignment is an expression that yields the assigned value, which is what allows the test to be performed in place. Often people will write:
if( ! ( error_val = doSomething()))
but I retained the == 0 for clarity.
Regarding your idioms...
Your first idiom is ok. Your second idiom is an abuse of the language and you should avoid it.
How about this version, then?
I'd usually just do something like your first example or possibly with a boolean like this:
bool statusOkay = true;
if (statusOkay)
    statusOkay = (doSomething() == 0);
if (statusOkay)
    statusOkay = (doSomethingElse() == 0);
if (statusOkay)
    statusOkay = (doSomethingElseNew() == 0);
But if you are really keen on the terseness of your second technique then you could consider this approach:
bool statusOkay = true;
statusOkay = statusOkay && (doSomething() == 0);
statusOkay = statusOkay && (doSomethingElse() == 0);
statusOkay = statusOkay && (doSomethingElseNew() == 0);
Just don't expect the maintenance programmers to thank you!
I use do { } while (false); every once in a while, when it seems appropriate. I see it as something like a try/catch block: the code is set up as a block containing a series of decisions with possible early exits, and the various paths through the rules and logic need to merge at the end of the block.
I am pretty sure I only use this construct in C, and not very often.
For the specific example you gave, a series of function calls performed one after the other, where either the whole series completes or it stops as soon as an error is detected, I would probably just use if statements checking an error variable.
{
    int iCallStatus = 0;
    iCallStatus = doFunc1();
    if (iCallStatus == 0) iCallStatus = doFunc2();
    if (iCallStatus == 0) iCallStatus = doFunc3();
}
This is short and the meaning is straightforward and clear even without comments.
What I run into from time to time is that this straightforward, sequential flow of procedural steps does not fit a particular requirement. What I need then is a code block with various decisions, usually involving loops or iteration over a series of data objects, which I want to treat as a kind of transaction: the transaction is committed if there is no error and aborted if an error condition is found while processing it. As part of this block I may create a set of temporary variables scoped to the do { } while (false);. Whenever I use this, I always add a comment indicating that it is a single-iteration do while, something like:
do { // single loop code block begins
    // block of statements for business logic with single ending point
} while (false); // single loop code block ends
Whenever I find myself thinking this construct is necessary, I look at whether the code should be refactored, or whether a function or set of functions would be more appropriate.
The reason I prefer this construct over a goto statement is that the brackets and indentation make the source easier to read. With my editor I can find the top and bottom of the block easily, and the indentation makes it easy to see the code as a block with a single entry point and a known ending point. There may be multiple exit points within the block, but I know where they all end up. It also lets me create localized variables that go out of scope at the end; plain brackets without the do { } while (false); would do that as well, but I use the do while because I also need the break; capability.
I would consider this style under conditions like these: the business logic requires a set of variables that are shared and referenced by different execution paths which later rejoin, or the logic is complex, with multiple states and checks across several levels of if statements, and if an error is detected during processing, an error indication is set and the processing is aborted.
The only times I can think of when I have actually used this were with something a bit gnarly, where it helped clarify the logic and made aborting the processing easier. So basically I was using it much like throwing an exception inside a try/catch.
The first style is a good example of why exceptions are superior: You can't even see the algorithm because it's buried under explicit error handling.
The second style abuses a loop construct to mimic goto in a misguided attempt to avoid explicitly spelling out goto. It's plainly evil, and long-time usage will lead you to the dark side.
For me, I'd prefer:
if(!doSomething()) {
    doSomethingElse();
}
doSomethingNew();
All of the other stuff is syntactic noise that obscures the three function calls. Inside Else and New you can throw an error, or, in older C-style code, use longjmp to jump back to some earlier handler. Nice, clean, and rather obvious.
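Since longjmp comes up here, a minimal, hedged sketch of that older mechanism (the function name and the error value 42 are made up for illustration): setjmp() records a recovery point and longjmp() unwinds back to it. Note that in C++ jumping over objects with non-trivial destructors is undefined behaviour, which is one reason exceptions are preferred.

// Sketch only: recover at a setjmp() point from deep inside the call chain.
#include <csetjmp>
#include <cstdio>

static std::jmp_buf recovery_point;

// Pretend this is deep inside the call chain and hits an unrecoverable error.
static void doSomethingNew()
{
    std::longjmp(recovery_point, 42);   // control resumes at the setjmp() call below
}

int main()
{
    // setjmp() returns 0 when first called, and the value passed to longjmp()
    // when control jumps back here. (In C++, jumping over objects with
    // destructors is undefined behaviour, so keep such code C-like.)
    if (setjmp(recovery_point) != 0)
    {
        std::puts("recovered from an error");
        return 1;
    }
    doSomethingNew();                   // never returns normally
    return 0;
}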
There seems to be a deeper problem here than your control constructs. Why do you have such complex error control? I.e. you seem to have multiple ways of handling different errors.
Typically, when I get an error, I simply break off the operation, present an error message to the user, and return control to the event loop, if an interactive application. For batch, log the error, and either continue processing on the next data item or abort the processing.
This kind of control flow is easily handled with exceptions.
If you have to handle error numbers, then you can effectively simulate exceptions by continuing normal error processing in case of an error, or returning the first error. Your continued processing after an error occurs seems to be very fragile, or your error conditions are really control conditions not error conditions.
Honestly, the more effective C/C++ programmers I've known would just use gotos in such situations. The general approach is to have a single exit label with all cleanup after it, and only one return path from the function. When the cleanup logic starts to get complicated or conditional, break the function into subfunctions. This is pretty typical for systems coding in C/C++, in my opinion, where the APIs you call return error codes rather than throw exceptions.
In general, gotos are bad, but since the usage I've described is so common, I think it's fine when done consistently.
The second style is commonly used for managing resource allocations and de-allocations in C, where RAII doesn't come to the rescue.
Typically, you would declare some resources before the do, allocate and use them inside the pseudo-loop, and de-allocate them outside.
An example of the general paradigm is as follows:
int status = 0;

// declare and initialize resources
BYTE *memory = NULL;
HANDLE file = INVALID_HANDLE_VALUE;
// etc...

do
{
    // do some work

    // allocate some resources
    file = CreateFile(...);
    if(file == INVALID_HANDLE_VALUE)
    {
        status = GetLastError();
        break;
    }

    // do some work with new resources

    // allocate more resources
    memory = malloc(...);
    if(memory == NULL)
    {
        status = ERROR_OUTOFMEMORY;
        break;
    }

    // do more work with new resources
} while(0);

// clean up the declared resources
if(file != INVALID_HANDLE_VALUE)
    CloseHandle(file);
if(memory != NULL)
    free(memory);

return status;
Having said that, RAII solves the same problem with much cleaner syntax (basically, you can forget the cleanup code altogether) and handles some scenarios that this approach does not, such as exceptions.
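To make that concrete, here is a hedged sketch of the RAII version (standard library types only; the actual work is just a placeholder): each resource is owned by an object whose destructor releases it, so every early return, and any exception, cleans up automatically.

// Sketch only: the FILE* is owned by a unique_ptr, the buffer by a vector.
#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

int doWork(const char* path)
{
    // FILE* owned by a unique_ptr with fclose as its deleter.
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(std::fopen(path, "rb"), &std::fclose);
    if (!file)
        return -1;                   // nothing to clean up by hand

    std::vector<unsigned char> buffer(4096);   // heap memory owned by the vector
    std::size_t n = std::fread(buffer.data(), 1, buffer.size(), file.get());
    if (n == 0)
        return -2;                   // file and buffer still released automatically

    // ... do some work with buffer ...
    return 0;                        // destructors run on this path as well
}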
I've seen this pattern before and didn't like it. Usually, it could be cleaned up by pulling the logic into a separate function.
The code then becomes
...
int errorCode = doItAll();
...
int doItAll(void) {
    int errorCode;
    if ((errorCode = doSomething()) != 0)
        return errorCode;
    if ((errorCode = doSomethingElse()) != 0)
        return errorCode;
    if ((errorCode = doSomethingElseNew()) != 0)
        return errorCode;
    return 0;
}
Combining this with cleanup becomes pretty easy too: you just use a goto and an error handler like in eclipse's answer.
...
int errorCode = doItAll();
...
int doItAll(void) {
    int errorCode;
    void * aResource = NULL; // Something that needs cleanup after doSomethingElse has been called
    if ((errorCode = doSomething()) != 0) // Doesn't need cleanup
        return errorCode;
    if ((errorCode = doSomethingElse(&aResource)) != 0)
        goto cleanup;
    if ((errorCode = doSomethingElseNew()) != 0)
        goto cleanup;
    return 0;
cleanup:
    releaseResource(aResource);
    return errorCode;
}
I use the second approach when I am managing a lot of pointers and the function is allowed to fail, so that throwing an exception is not really the right answer.
I find it easier to manage cleanup of the pointers in one place rather than in several, and you also know the function only returns in one place.
pointerA * pa = NULL;
pointerB * pb = NULL;
pointerC * pc = NULL;
BOOL bRet = FALSE;

pa = new pointerA();
do {
    if (!dosomethingWithPA( pa ))
        break;

    pb = new pointerB();
    if(!dosomethingWithPB( pb ))
        break;

    pc = new pointerC();
    if(!dosomethingWithPC( pc ))
        break;

    bRet = TRUE;
} while (FALSE);

//
// cleanup
//
if (NULL != pa)
    delete pa;
if (NULL != pb)
    delete pb;
if (NULL != pc)
    delete pc;

return bRet;
in contrast with
pointerA * pa = NULL;
pointerB * pb = NULL;
pointerC * pc = NULL;

pa = new pointerA();
if (!dosomethingWithPA( pa )) {
    delete pa;
    return FALSE;
}

pb = new pointerB();
if(!dosomethingWithPB( pb )) {
    delete pa;
    delete pb;
    return FALSE;
}

pc = new pointerC();
if(!dosomethingWithPAPBPC( pa, pb, pc )) {
    delete pa;
    delete pb;
    delete pc;
    return FALSE;
}

delete pa;
delete pb;
delete pc;
return TRUE;
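As an aside (not part of the answer above), here is a hedged sketch of how smart pointers in C++11 give the same "clean up in one place" guarantee without either the do/while block or the repeated deletes; the classes and their work() calls are placeholders:

// Sketch only: each unique_ptr deletes its object on every return path.
#include <memory>

struct PointerA { bool work() { return true; } };
struct PointerB { bool work() { return true; } };
struct PointerC { bool work() { return true; } };

bool doEverything()
{
    std::unique_ptr<PointerA> pa(new PointerA);
    if (!pa->work())
        return false;                    // pa deleted automatically

    std::unique_ptr<PointerB> pb(new PointerB);
    if (!pb->work())
        return false;                    // pa and pb deleted automatically

    std::unique_ptr<PointerC> pc(new PointerC);
    if (!pc->work())
        return false;                    // pa, pb and pc deleted automatically

    return true;                         // everything deleted on this path too
}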