I have a question about try...catch usage.
I want to know how to make good use of try...catch in situations where a user enters bad data and it needs to be corrected before the program proceeds. In the first snippet I cannot use continue without a loop, and if I have to add a loop anyway I could simply use if...else for the error check, as in the second snippet. Would I really need try...catch?
int size;
try
{
    cin >> size;
    Stack stack(size); // create a Stack(int) object.
}
catch (InvalidStackSize e)
{
    cerr << "Invalid size for stack.\n";
    continue; // What should I do here to ensure a stack with the right size is entered before proceeding with the rest of the program?
}
In the second snippet I use a loop with if...else. If I always need a loop to get the right user input anyway, why use try...catch?
int size;
Stack stack;
do
{
    cin >> size;
    if (size == 0)
        continue;
    else
    {
        Stack tmp(size);
        stack = tmp;
        break;
    }
} while (true);
Exceptions are meant for exceptional behaviour, not for routine error checking such as validating user input. An exception happens in the middle of executing an instruction, and if you do not handle it properly, the next instruction will not be executed and the process stops.
From the C++ Standard
Exception handling provides a way of transferring control and
information from a point in the execution of a thread to an exception
handler associated with a point previously passed by the execution.
To handle such exceptions, C++ provides exception handling using try, throw and catch.
For the code below, exception handling is not required, because none of its instructions will throw an exception or cause a runtime error; for plain input validation you can use if...else as you did.
do
{
    cin >> size;
    if (size == 0)
        continue;
    else
    {
        Stack tmp(size);
        stack = tmp;
        break;
    }
} while (true);
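If you did want the constructor's exception to drive the correction, you would still need a loop around the try block. Here is a rough sketch, assuming (as in your first snippet) that Stack(int) throws InvalidStackSize for a bad size:

int size;
Stack stack;                  // default-constructed, as in your second snippet
bool ok = false;
while (!ok)
{
    try
    {
        cin >> size;
        Stack tmp(size);      // throws InvalidStackSize on a bad size (assumption)
        stack = tmp;
        ok = true;            // only reached if construction succeeded
    }
    catch (InvalidStackSize&)
    {
        cerr << "Invalid size for stack, try again.\n";
    }
}

So the loop does not go away; the exception merely replaces the explicit if (size == 0) check, which is why if...else is the simpler tool here.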
I'm trying to write my first simple game using C++ and Allegro 4.2.3, but I'm getting crashes that sometimes occur when I change the game-state. When a state is started it uses the 'new' operator to make it, and then uses 'delete' before switching to the next one. I'm not sure if I fully understand using the new and delete operators, though! Here's a selection of code:
enum // Game state
{
    TITLE_SCREEN, CUTSCENE_1, EXIT
};
int main(void)
{
    const int screenwidth = 800;
    const int screenheight = 600;
    bool quit = false;
    int state = TITLE_SCREEN;
    int oldstate = -1;
    int oldoldstate;

    init(screenwidth, screenheight);
    install_int(counter, (1000 / FPS));

    TitleState *Title;
    Cutscene *Scene1;

    srand(time(NULL));

    while (!quit)
    {
        while (tick == 0)
        {
            rest(1);
        }
        while (tick > 0)
        {
            oldoldstate = oldstate;
            oldstate = state;
            switch (state)
            {
            case TITLE_SCREEN:
                // If the last state is different to this one, create the state
                if (oldoldstate != TITLE_SCREEN)
                {
                    Title = new TitleState();
                }
                // Run the program in the state
                Title->Play();
                // Check the state to see if it has changed
                state = Title->CheckState();
                // If the state has changed, delete the current state
                if (oldstate != state)
                {
                    delete Title;
                }
                break;
            case CUTSCENE_1:
                if (oldoldstate != CUTSCENE_1)
                {
                    Scene1 = new Cutscene(); // SOMETIMES CRASHES BEFORE HERE
                }
                Scene1->Play();
                state = Scene1->CheckState();
                if (oldstate != state)
                {
                    delete Scene1;
                }
                break;
            case EXIT:
                quit = true;
                break;
            default:
                allegro_message("Game state not found!");
                exit(-1);
            }
            int oldtick = tick;
            tick--;
            if (oldtick <= tick)
                break;
        }
    }
    deinit();
    return 0;
}
When the program crashes, VS2010 opens up thread.c to show where the error was:
static void _callthreadstart(void)
{
_ptiddata ptd; /* pointer to thread's _tiddata struct */
/* must always exist at this point */
ptd = _getptd();
/*
* Guard call to user code with a _try - _except statement to
* implement runtime errors and signal support
*/
__try
{
( (void(__CLRCALL_OR_CDECL *)(void *))(((_ptiddata)ptd)->_initaddr) )
( ((_ptiddata)ptd)->_initarg ); //ERROR HERE (Next statement to be executed)
_endthread();
}
__except ( _XcptFilter(GetExceptionCode(), GetExceptionInformation()) )
{
/*
* Should never reach here
*/
_exit( GetExceptionCode() );
} /* end of _try - _except */
}
I would greatly appreciate any help, because I'm not sure at all what the problem is.
Thanks for the help and suggestions, they are interesting and useful to know! However, I think the solution to my problem was to use a newer version of the Allegro library - I was told to use 4.2.3 for this but moving to 4.4.2 has removed this problem as far as I can tell!
Well, new allocates memory from the heap, and delete releases it.
You cannot delete (release) the same memory twice. You could go through a tutorial on new/delete.
I have not gone through your code in detail, but here are a few things you can do:
1) Run your code in gdb to get a stack trace of the crash; that will tell you most of what you need to know.
2) If it is because of new/delete, then it may be a double free; there are other reasons why new/delete may fail, too.
3) You can keep a count of the number of times new/delete gets called: override new/delete and write your own new/delete routines.
4) If you cannot do any of the above, then simply add printf statements to find out where it is failing. printf output is not always flushed in time, so call fflush() as well.
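For point 3, a minimal C++11 sketch of counting allocations by replacing the global operators (the names are illustrative and the counters are not thread-safe):

#include <cstdlib>
#include <new>

static long g_news = 0;     // how many times operator new has run
static long g_deletes = 0;  // how many times operator delete has run

void* operator new(std::size_t size)
{
    ++g_news;
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{
    if (p)
        ++g_deletes;
    std::free(p);
}

If g_deletes ever exceeds g_news, something is being deleted more often than it was created.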
This is a tiny fraction of the program's code, and it's hard to tell what could be wrong.
This happens because C++ has the concept of "undefined behavior": when you do something wrong (for example, deleting the same object twice) you cannot rely on an error being raised immediately; instead, something else, possibly unrelated, may behave strangely later (even millions of instructions later).
One of the basic assumptions of the language is that programmers don't make this kind of error (go figure!).
It's perfectly possible that you made a mistake in another part of the code and you only see the error at that delete call; the call is just the "victim" of what happened, not the point where the problem is.
There are some general rules that can help prevent allocation/deallocation errors, like using standard containers instead of naked pointers and following the rule of the "big three".
Every time you write an instruction in a C++ program you should think carefully, because if you let a logical bug in, it will be very hard to remove later. This is less of an issue in languages where "undefined behavior" is not present at the language level.
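As a concrete illustration of the "standard containers instead of naked pointers" advice, here is a minimal sketch (the class body is a stand-in; std::unique_ptr is part of VS2010's C++0x support): a smart pointer owns the state object, so it is released exactly once and a double delete cannot happen.

#include <memory>

struct TitleState                       // stand-in for the real class
{
    void Play() {}
    int  CheckState() { return 1; }     // pretend the state changes
};

int main()
{
    std::unique_ptr<TitleState> title;

    if (!title)
        title.reset(new TitleState());  // enter the state: allocate once
    title->Play();
    int state = title->CheckState();
    if (state != 0)
        title.reset();                  // leave the state: freed exactly once, pointer becomes null
    return 0;
}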
I'm looking at error testing and reporting techniques for function calls, especially when multiple functions are called. As an example of what I mean, for simplicity each function returns a bool:
success = false;
if (fnOne())
{
    if (fnTwo())
    {
        if (fnThree())
        {
            success = true;
        }
        else
        {
            cout << "fnThree failed" << endl;
        }
    }
    else
    {
        cout << "fnTwo failed" << endl;
    }
}
else
{
    cout << "fnOne failed" << endl;
}
I find that with the above example (which I see everywhere) the code quickly becomes unreadable, especially when the calling code grows to several screens in height.
Currently my way of dealing with this in C++ (I'm including the 'c' tag in case someone has a clean C technique) is to store a bool and a string in my object. The bool represents success/failure and the string gives the reason for the failure. I call a function, and if it fails, the function internally puts the object into the failed state and records a string-based reason. I'm still not 100% happy with this method, but it's the best I have so far. Here is an example of what it looks like:
void myobj::fnOne (void)
{
    if (m_fluxCapacitorProngCount > 3)
    {
        setState (false, "myobj::fnOne - Flux capacitor has been breeding again");
    }
}

void myobj::fnTwo (void)
{
    if (m_answerToLifeUniverseAndEverything != 42)
    {
        setState (false, "myobj::fnTwo - Probability drive enabled?");
    }
}

void myobj::setup (void)
{
    // Ensure time travel is possible
    if (valid())
    {
        fnOne ();
    }
    // Ensure the universe has not changed
    if (valid())
    {
        fnTwo ();
    }
    // Error? show the reason
    if (valid() == false)
    {
        cout << getStateReason () << endl;
    }
}
Where valid() returns true/false and getStateReason() returns the string provided by the function where the error occurred.
I like that this grows without the need to nest the conditions; to me this is more readable, but I'm sure there are problems...
What is the best [cleanest] way to handle detecting and reporting multiple function call return conditions?
This code should be clearer than your first variant:
if (!fnOne ())
{
    cout << "fnOne failed" << endl;
    return;
}
if (!fnTwo ())
{
    cout << "fnTwo failed" << endl;
    return;
}
if (!fnThree ())
{
    cout << "fnThree failed" << endl;
    return;
}
success = true;
In general, for C++ you can use exceptions for error handling.
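For instance, if the three functions were rewritten to throw instead of returning bool (an assumption, not their current signatures), the calling code collapses to:

try
{
    fnOne();                    // each call throws std::runtime_error on failure (assumption)
    fnTwo();
    fnThree();
    success = true;
}
catch (const std::runtime_error& e)
{
    cout << e.what() << endl;   // the message says which call failed
}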
If you really want one function to return a value that represents the success/failure of several other functions (and just that - not a generalized return value from each function, which would require some way of returning an array/tuple/vector of values), here's one approach:
int bigFunction()
{
    int return_value = 0;
    if (function1() != 0)
        return_value |= (1 << 0);
    if (function2() != 0)
        return_value |= (1 << 1);
    if (function3() != 0)
        return_value |= (1 << 2);
    // ....
    return return_value;
}
The idea is to allocate one bit each in the return value to indicate success/failure of each sub-function. If your sub-functions have a small set of possible return values that you actually want to capture, you could use more than one bit per function - i.e. two bits would allow you four different values for that field.
On the other hand, something like this means you're probably either a) writing some pretty low-level code like a device driver or kernel or something or b) there is probably a better approach to solving the problem at hand.
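For completeness, a caller would then test individual bits to find out which sub-functions failed, for example:

int result = bigFunction();
if (result & (1 << 0))
    cout << "function1 failed" << endl;
if (result & (1 << 1))
    cout << "function2 failed" << endl;
if (result & (1 << 2))
    cout << "function3 failed" << endl;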
Dealing with errors in your code (bugs) and errors arising out of user input is a huge topic on its own. The technique you employ depends on the complexity of your code and the expected life of the code. The error handling strategy you would employ for a homework project is less complex than the error handling strategy you would employ for a semester project, which will be less complex than the error handling strategy you would employ for an in-house project, which will be less complex than a project which will be widely distributed to clients.
Strategy 1: Write an error message and abort
The simplest error handling strategy, which you can employ in a homework project, is to write a message to stdout and then call abort().
void fun1(int in)
{
    if (in < 0)
    {
        printf("Can't work with a negative number.\n");
        abort();
    }
    // Rest of the function.
}
Strategy 2: Set a global error code and return
The next level of error handling involves detecting a bad input and dealing with it without calling abort(). You could set a globally accessible error code to indicate the type of error. I would recommend using this approach for homework projects, semester projects, and projects that are exploratory in nature.
void fun2(int in)
{
    if (in < 0)
    {
        // Indicate that "fun2" came across a NEGATIVE_INTEGER_ERROR.
        setErrorCode(NEGATIVE_INTEGER_ERROR, "fun2");
        return;
    }
    // Rest of the function.
}

void funUser(int in)
{
    // Call fun2
    fun2(in);
    // If fun2 had any errors, deal with it.
    if (checkErrorCode())
    {
        return;
    }
    // Rest of the function.
}
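The setErrorCode()/checkErrorCode() helpers are not spelled out above; one possible sketch (a single global error slot, an assumption rather than part of the original code):

static int g_errorCode = 0;           /* 0 means no error pending */
static const char *g_errorSource = "";

void setErrorCode(int code, const char *source)
{
    g_errorCode = code;               /* remember what went wrong */
    g_errorSource = source;           /* and which function reported it */
}

int checkErrorCode(void)
{
    return g_errorCode;               /* non-zero means an error is pending */
}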
The next level of error handling involves detecting a bad input and dealing with it using other options. You could return an error code from the function. If you are using C++, you could throw an exception. Both these options are valid ways of dealing with large projects --- be they in-house or distributed for wider consumption. They are applicable to any project in which the user base is beyond the team of developers.
Strategy 3: Return an error code from the function
int fun3(int in)
{
    if (in < 0)
    {
        // Indicate that "fun3" came across a NEGATIVE_INTEGER_ERROR.
        return NEGATIVE_INTEGER_ERROR;
    }
    // Rest of the function.
}

void funUser(int in)
{
    // Call fun3
    int ecode = fun3(in);
    // If fun3 had any errors, deal with it.
    if (ecode)
    {
        return;
    }
    // Rest of the function.
}
Strategy 4: Throw an error code from the function (C++)
void fun4(int in)
{
    if (in < 0)
    {
        // Indicate that "fun4" came across a NEGATIVE_INTEGER_ERROR.
        throw NEGATIVE_INTEGER_ERROR;
    }
    // Rest of the function.
}

void funUser(int in)
{
    // Call fun4. Be prepared to deal with the exception or let it be
    // dealt with by another function higher up in the call stack.
    // It makes sense to catch the exception only if this function can
    // do something useful with it.
    fun4(in);
    // Rest of the function.
}
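A function higher up the call stack might then handle it, for example (a sketch; NEGATIVE_INTEGER_ERROR is assumed to be an integer constant, so an int is what gets thrown and caught):

void topLevel(int in)
{
    try
    {
        funUser(in);                  // may let fun4's exception escape
    }
    catch (int errorCode)             // the thrown error code lands here
    {
        // Report or recover; control does not go back into fun4.
        std::cerr << "Operation failed with error code " << errorCode << std::endl;
    }
}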
Hope this gives you enough background to adopt an appropriate error handling strategy for your project.
I have a (class member) function in which I wish to avoid an app crash due to an out-of-range access. For that purpose I have added a try-catch block as shown below:
T getGene(unsigned int position) {
    T val;
    try {
        val = _genome.at(_isCircular ? position % _genome.size() : position);
    }
    catch (std::exception& e) {
        std::cerr << "Error in [" << __PRETTY_FUNCTION__ << "]: "
                  << e.what() << std::endl;
        exit(1);
    }
    return val;
}
Now, I wish to add a Boost unit test, which I thought would look something like this:
BOOST_AUTO_TEST_CASE(nonCircularGenome_test) {
    // set size to 10
    test.setSize(10);
    // set non circular
    test.setNonCircular();
    // gene at site # 12 does not exist in a 10-site long genome, must throw an exception
    BOOST_CHECK_THROW(test.getGene(12), std::out_of_range);
}
The problem is, I can't get both of these things to work. The try-catch block works well in a release build, but the test passes only if I remove the try-catch block and let the function throw the exception.
What is the best way to get both of these working, so that a user is shown the correct error at run time while the tests explicitly check for the exception in debug? One way is to use #ifdef DEBUG/#endif blocks, but I wish to avoid pre-processor macros.
Thanks in advance,
Nikhil
You seem to be misunderstanding the scope and purpose of exceptions - and perhaps of error handling in general.
First of all, you should define what are the pre-conditions of your function: does getGene() always expect position to be a valid one? Does it expect its clients to never provide invalid positions?
If that is the case, a client that provides an invalid position (even if the client is a test routine) is breaking the contract with the getGene() (in particular, it is breaking its pre-condition), and breaking a contract is undefined behavior by definition. You cannot test undefined behavior, so you should remove your test.
On the other hand, if your function has a wide contract, i.e. it allows the clients to pass in any position (even invalid ones) and (a) throws an exception or (b) returns an error code to communicate failure when the position is invalid, then the exit(1) line should not be there, because you are quitting the program and control is not transferred back to the caller.
One possibility is to re-throw the exception after logging the diagnostic:
T getGene(unsigned int position) {
    T val;
    try {
        val = _genome.at(_isCircular ? position % _genome.size() : position);
    }
    catch (std::exception& e) {
        std::cerr << "Error in [" << __PRETTY_FUNCTION__ << "]: "
                  << e.what() << std::endl;
        throw;
        // ^^^^^
    }
    return val;
}
And if you do not need to print a diagnostic, just let the exception naturally propagate:
T getGene(unsigned int position) {
    return _genome.at(_isCircular ? position % _genome.size() : position);
}
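With the exception left to propagate, the BOOST_CHECK_THROW test from the question passes as written, and any caller that still wants the friendly message can catch the exception at the point where exiting (or retrying) is the right decision. A sketch, with genome standing in for an instance of your class:

try
{
    genome.getGene(12);
}
catch (const std::out_of_range& e)
{
    std::cerr << "Invalid position: " << e.what() << std::endl;
    // decide here whether to exit, retry, or ignore
}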
A while back I switched the way I handled c style errors.
I found a lot of my code looked like this:
int errorCode = 0;
errorCode = doSomething();
if (errorCode == 0)
{
    errorCode = doSomethingElse();
}
...
if (errorCode == 0)
{
    errorCode = doSomethingElseNew();
}
But recently I've been writing it like this:
int errorCode = 0;
do
{
    if (doSomething() != 0) break;
    if (doSomethingElse() != 0) break;
    ...
    if (doSomethingElseNew() != 0) break;
} while (false);
I've seen a lot of code where nothing gets executed after there's an error, but it has always been written in the first style. Is there anyone else who uses this style, and if you don't, why?
Edit: just to clarify, usually this construct relies on errno; otherwise I assign the value to an int before breaking. Also, there's usually more code than just a single function call inside the if (error == 0) clauses. Lots of good points to think on, though.
If you're using C++, just use exceptions. If you're using C, the first style works great. But if you really do want the second style, just use gotos - this is exactly the type of situation where gotos really are the clearest construct.
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandler;
return;
errorHandler:
// handle error
Yes, gotos can be bad, and exceptions or explicit error handling after each call may be better, but gotos are much better than co-opting another construct to simulate them poorly. Using gotos also makes it trivial to add another error handler for a specific error:
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandlerSomethingElseNew;
return;
errorHandler:
// handle error
return;
errorHandlerSomethingElseNew:
// handle error
return;
Or if the error handling is more of the "unrolling/cleaning up what you've done" variety, you can structure it like this:
int errorCode = 0;
if ((errorCode = doSomething()) != 0) goto errorHandler;
if ((errorCode = doSomethingElse()) != 0) goto errorHandler1;
...
if ((errorCode = doSomethingElseNew()) != 0) goto errorHandler2;
errorHandler2:
// clean up after doSomethingElseNew
errorHandler1:
// clean up after doSomethingElse
errorHandler:
// clean up after doSomething
return errorCode;
This idiom gives you the advantage of not repeating your cleanup code (of course, if you're using C++, RAII will cover the cleanup code even more cleanly).
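For comparison, here is a minimal RAII sketch (the Resource type and the acquire/release functions are hypothetical): each guard's destructor runs on every exit path, so the cleanup labels disappear entirely.

struct ResourceGuard                            // owns one Resource for its lifetime
{
    Resource* r;
    explicit ResourceGuard(Resource* res) : r(res) {}
    ~ResourceGuard() { if (r) releaseResource(r); }  // runs on every return path
};

int doWork()
{
    ResourceGuard a(acquireFirst());            // hypothetical acquisition functions
    if (a.r == NULL) return -1;

    ResourceGuard b(acquireSecond());
    if (b.r == NULL) return -1;                 // 'a' is cleaned up automatically here

    // ... the actual work ...
    return 0;                                   // both guards clean up here as well
}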
The second snippet just looks wrong. You've effectively re-invented goto.
Anyone reading the first style will immediately know what's happening; the second style requires more examination, and thus makes maintenance harder in the long run, for no real benefit.
Edit: in the second style, you've thrown away the error code, so you can't take any corrective action, display an informative message, log something useful, etc.
The first style is a pattern the experienced eye groks at once.
The second requires more thought - you look at it and see a loop. You expect several iterations, but as you read through it, this mental model gets shattered...
Sure, it may work, but programming languages aren't just a way to tell a computer what to do, they are a way to communicate those ideas to other humans too.
I think the first one gives you more control over what to do with a particular error. The second way only tells you that an error occurred, not where or what it was.
Of course, exceptions are superior to both...
Make it short, compact, and easy to quickly read?
How about:
if ((errorcode = doSomething()) == 0
    && (errorcode = doSomethingElse()) == 0
    && (errorcode = doSomethingElseNew()) == 0)
    maybe_something_here;
return errorcode; // or whatever is next
Why not replace the do/while and break with a function and returns instead?
You have reinvented goto.
What about using exceptions?
try {
    DoSomeThing();
    DoSomethingElse();
    DoSomethingNew();
    .
    .
    .
}
catch (DoSomethingException e) {
    .
    .
}
catch (DoSomethingElseException e) {
    .
    .
}
catch (DoSomethingNewException e) {
    .
    .
}
catch (...) {
    .
    .
}
Your method isn't really bad and it's not unreadable like people here are claiming, but it is unconventional and will annoy some (as you noticed here).
The first one can get REALLY annoying after your code gets to a certain size because it has a lot of boilerplate.
The pattern I tended to use when I couldn't use exceptions was more like:
fn() {
    while (true) {
        if (doIt())
            handleError(); // error detected...
    }
}

bool doIt() {
    if (!doThing1Succeeds())
        return true;
    if (!doThing2Succeeds())
        return true;
    return false;
}
Your second function should be inlined into the first if you put the correct magic incantations in the signature, and each function should be more readable.
This is functionally identical to the while/bail loop without the unconventional syntax (and also a bit easier to understand, because you separate the concern of looping/error handling from the concern of "what does your program do in a given loop").
This should be done with exceptions, at least if the C++ tag is correct. There is nothing wrong with it if you are using C only, although I suggest using a Boolean instead, since you are not using the returned error code. Then you don't have to type != 0 either...
I've used the technique in a few places (so you aren't the only one who does it). However, I don't do it as a general rule, and I have mixed feelings about it where I have used it. Used with careful documentation (comments) in a few places, I'm OK with it. Used everywhere - no, generally not a good idea.
Relevant exhibits: files sqlstmt.ec, upload.ec, reload.ec from the SQLCMD source code (no, not Microsoft's impostor; mine). The '.ec' extension means the file contains ESQL/C, Embedded SQL in C, which is pre-processed to plain C; you don't need to know ESQL/C to see the loop structures. The loops are all labelled with:
/* This is a one-cycle loop that simplifies error handling */
The classic C idiom is:
if( (error_val = doSomething()) == 0)
{
    //Manage error condition
}
Note that C returns the assigned value from an assignment, enabling a test to be performed. Often people will write:
if( ! ( error_val = doSomething()))
but I retained the == 0 for clarity.
Regarding your idioms...
Your first idiom is ok. Your second idiom is an abuse of the language and you should avoid it.
How about this version, then?
I'd usually just do something like your first example or possibly with a boolean like this:
bool statusOkay = true;
if (statusOkay)
    statusOkay = (doSomething() == 0);
if (statusOkay)
    statusOkay = (doSomethingElse() == 0);
if (statusOkay)
    statusOkay = (doSomethingElseNew() == 0);
But if you are really keen on the terseness of your second technique then you could consider this approach:
bool statusOkay = true;
statusOkay = statusOkay && (doSomething() == 0);
statusOkay = statusOkay && (doSomethingElse() == 0);
statusOkay = statusOkay && (doSomethingElseNew() == 0);
Just don't expect the maintenance programmers to thank you!
I use do { } while (false); every once in a while, when it seems appropriate. I see it as something like a try/catch block: the code is set up as a block containing a series of decisions and possible early exits, and the various paths through the rules and logic all merge at the end of the block.
I am pretty sure I only use this construct with C programming and it is not very often.
With the specific example you gave of a series of function calls that will be performed one after the other with the complete series being done or the series stopped if an error is detected, I would probably just use if statements checking an error variable.
{
    int iCallStatus = 0;
    iCallStatus = doFunc1();
    if (iCallStatus == 0) iCallStatus = doFunc2();
    if (iCallStatus == 0) iCallStatus = doFunc3();
}
This is short and the meaning is straightforward and clear even without comments.
What I have run into from time to time is that this fairly straightforward sequential flow of procedural steps does not fit a particular requirement. What I need is a code block with various decisions, usually involving loops or iterating over some series of data objects, and I want to treat the block as a kind of transaction: it is committed if there is no error and aborted if an error condition is found during processing. As part of this block, I may create a set of temporary variables whose scope is limited to the do { } while (false);. Whenever I use this, I always put in a comment indicating that it is a single-iteration do while, something like:
do { // single loop code block begins
// block of statements for business logic with single ending point
} while (false); // single loop code block ends
Whenever I find myself thinking this construct is necessary, I look at whether the code needs to be refactored or whether a function or set of functions would be more appropriate.
The reason I prefer this construct over a goto statement is that the brackets and indentation make the source code easier to read. With my editor I can find the top and bottom of the block easily, and the indentation makes it easier to visualize the code as a block with a single entry point and a known ending point. There may be multiple exit points within the block, but I know where they will all end up. Using this also means I can create localized variables that go out of scope at the end of the block, though just using braces without the do { } while (false); does that as well. However, I use the do while because I also need the break; capability.
I would consider using this style under some of the following conditions: the business logic being implemented requires a set of variables that are shared and referenced by different execution paths which later rejoin; or the business logic is complex, with multiple states and checks across several levels of if statements, and if an error is detected during processing, an error indication is set and the processing is aborted.
The only times I can think of when I have used this were with something a bit gnarly, where it helped to clarify the logic and made aborting the processing easier. So basically I was using it much like throwing an exception inside a try/catch.
The first style is a good example of why exceptions are superior: You can't even see the algorithm because it's buried under explicit error handling.
The second style abuses a loop construct to mimic goto in a misled attempt to avoid having to explicitly spell out goto. It's plainly evil, and long-time usage will lead you to the dark side.
For me, I'd prefer:
if (!doSomething()) {
    doSomethingElse();
}
doSomethingNew();
All of the other stuff is syntactic noise that obscures the three function calls. Inside Else and New you can throw an error or, in older code, use longjmp to jump back to some earlier handler. Nice, clean and rather obvious.
There seems to be a deeper problem here than your control constructs. Why do you have such complex error control? I.e. you seem to have multiple ways of handling different errors.
Typically, when I get an error, I simply break off the operation, present an error message to the user, and return control to the event loop, if an interactive application. For batch, log the error, and either continue processing on the next data item or abort the processing.
This kind of control flow is easily handled with exceptions.
If you have to handle error numbers, then you can effectively simulate exceptions by continuing normal error processing in case of an error, or returning the first error. Your continued processing after an error occurs seems to be very fragile, or your error conditions are really control conditions not error conditions.
Honestly, the more effective C/C++ programmers I've known would just use gotos in such situations. The general approach is to have a single exit label with all cleanup after it, and only one return path from the function. When the cleanup logic starts to get complicated or conditional, break the function into subfunctions. This is pretty typical for systems coding in C/C++, in my opinion, where the APIs you call return error codes rather than throw exceptions.
In general, gotos are bad, but since the usage I've described is so common and done consistently, I think it's fine.
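A sketch of that shape (the file and the buffer are just stand-in resources): one label, all the cleanup after it, and a single return at the bottom.

#include <cstdio>
#include <cstdlib>

int doWork(void)
{
    int rc = 0;
    FILE* fp = NULL;
    char* buffer = NULL;

    fp = std::fopen("data.txt", "r");           // stand-in resource #1
    if (fp == NULL) { rc = -1; goto out; }

    buffer = (char*)std::malloc(1024);          // stand-in resource #2
    if (buffer == NULL) { rc = -2; goto out; }

    /* ... the actual work ... */

out:                                            // single exit: all cleanup lives here
    std::free(buffer);                          // free(NULL) is a no-op
    if (fp)
        std::fclose(fp);
    return rc;                                  // only one return path
}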
The second style is commonly used for managing resource allocations and de-allocations in C, where RAII doesn't come to the rescue.
Typically, you would declare some resources before the do, allocate and use them inside the pseudo-loop, and de-allocate them outside.
An example of the general paradigm is as follows:
int status = 0;

// declare and initialize resources
BYTE *memory = NULL;
HANDLE file = INVALID_HANDLE_VALUE;
// etc...

do
{
    // do some work

    // allocate some resources
    file = CreateFile(...);
    if (file == INVALID_HANDLE_VALUE)
    {
        status = GetLastError();
        break;
    }

    // do some work with new resources

    // allocate more resources
    memory = malloc(...);
    if (memory == NULL)
    {
        status = ERROR_OUTOFMEMORY;
        break;
    }

    // do more work with new resources
} while (0);

// clean up the declared resources
if (file != INVALID_HANDLE_VALUE)
    CloseHandle(file);
if (memory != NULL)
    free(memory);

return status;
Having said that, RAII solves the same problem with much cleaner syntax (basically, you can forget the cleanup code altogether) and handles some scenarios that this approach does not, such as exceptions.
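For comparison, a rough C++ sketch of the same shape using RAII: a std::vector owns the buffer and a tiny guard owns the HANDLE, so the explicit cleanup section disappears (the names and CreateFileA arguments are illustrative only).

#include <windows.h>
#include <vector>

struct HandleGuard                               // minimal RAII wrapper for a HANDLE
{
    HANDLE h;
    explicit HandleGuard(HANDLE handle) : h(handle) {}
    ~HandleGuard() { if (h != INVALID_HANDLE_VALUE) CloseHandle(h); }
};

int doWork()
{
    HandleGuard file(CreateFileA("data.bin", GENERIC_READ, 0, NULL,
                                 OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL));
    if (file.h == INVALID_HANDLE_VALUE)
        return (int)GetLastError();              // nothing to clean up yet

    std::vector<BYTE> memory(1024);              // released automatically on every path

    // ... do some work with file.h and memory ...

    return 0;                                    // destructors close the handle and free the buffer
}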
I've seen this pattern before and didn't like it. Usually, it could be cleaned up by pulling the logic into a separate function.
The code then becomes
...
int errorCode = doItAll();
...
int doItAll(void) {
    int errorCode;
    if ((errorCode = doSomething()) != 0)
        return errorCode;
    if ((errorCode = doSomethingElse()) != 0)
        return errorCode;
    if ((errorCode = doSomethingElseNew()) != 0)
        return errorCode;
    return 0;
}
Combining this with cleanup becomes pretty easy too; you just use a goto and an error handler, as in eclipse's answer.
...
int errorCode = doItAll();
...
int doItAll(void) {
    int errorCode;
    void *aResource = NULL; // Something that needs cleanup after doSomethingElse has been called
    if ((errorCode = doSomething()) != 0) // Doesn't need cleanup
        return errorCode;
    if ((errorCode = doSomethingElse(&aResource)) != 0)
        goto cleanup;
    if ((errorCode = doSomethingElseNew()) != 0)
        goto cleanup;
    return 0;
cleanup:
    releaseResource(aResource);
    return errorCode;
}
I use the second approach when I am managing a lot of pointers and when it is acceptable for the function to fail, so that throwing an exception is not really the correct answer.
I find it is easier to manage the cleanup of the pointers in one place instead of several, and you also know that the function is only going to return in one place.
pointerA *pa = NULL;
pointerB *pb = NULL;
pointerC *pc = NULL;
BOOL bRet = FALSE;

pa = new pointerA();
do {
    if (!dosomethingWithPA( pa ))
        break;

    pb = new pointerB();
    if (!dosomethingWithPB( pb ))
        break;

    pc = new pointerC();
    if (!dosomethingWithPC( pc ))
        break;

    bRet = TRUE;
} while (FALSE);

//
// cleanup
//
if (NULL != pa)
    delete pa;
if (NULL != pb)
    delete pb;
if (NULL != pc)
    delete pc;

return bRet;
in contrast with
pointerA *pa = NULL;
pointerB *pb = NULL;
pointerC *pc = NULL;

pa = new pointerA();
if (!dosomethingWithPA( pa )) {
    delete pa;
    return FALSE;
}

pb = new pointerB();
if (!dosomethingWithPB( pb )) {
    delete pa;
    delete pb;
    return FALSE;
}

pc = new pointerC();
if (!dosomethingWithPAPBPC( pa, pb, pc )) {
    delete pa;
    delete pb;
    delete pc;
    return FALSE;
}

delete pa;
delete pb;
delete pc;
return TRUE;