Using STL containers with exception handling in low memory situation - c++

I am using STL containers in my code (developed in C++ with Visual Studio 2010).
I have never used exception handling with STL containers before. Since STL containers can throw bad_alloc, I plan to use them as in the sample code shown below. Let's assume function() gets called in a low memory situation.
Now, I am not sure whether this code is foolproof or whether I need to do any additional cleanup.
class MyClass
{
    std::vector<int>* integer_vector;
public:
    MyClass()
    {
        std::string message;
        message += "my message";               // bad_alloc could be thrown from here
        integer_vector = new std::vector<int>; // bad_alloc could be thrown from here
    }
};
void function()
{
    try
    {
        MyClass* myclass_ptr;
        myclass_ptr = new (std::nothrow) MyClass;
        if (myclass_ptr == NULL)
        {
            // ANY CLEANUP NEEDED HERE?
            return;
        }
        std::map<int, char> myintcharmap; // bad_alloc could be thrown from here
    }
    catch (...)
    {
        // ANY CLEANUP NEEDED HERE?
        return;
    }
}
Please can someone have a look and help.

You have two main potential leaks in the code you show, both of which arguably stem from using raw pointers. You should prefer std::unique_ptr (if you have C++11) or other similar "smart" pointers to indicate ownership, and for exception safety in general. Modern guidance is to avoid almost all use of new and delete; when they cannot be avoided, they need to be paired. Note that your code has two calls to new but none to delete.
Inside function, the core problem is that you could fully allocate the data "owned" by myclass_ptr, hit an exception in a later allocation, and then be unable to clean it up because myclass_ptr is no longer in scope.
Let's say you fixed that so it cleaned up the MyClass instance if an exception occurred after its creation. Your code would still leak, because inside MyClass there's currently a similar problem with integer_vector. Although you could follow the rule of three and write a destructor to handle this case, it's probably easier to use a smart pointer (or a plain member) here as well.
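For illustration, here is a minimal sketch of what that could look like (one possible fix, not the only option), where ownership is handled by the objects themselves so no explicit delete is needed anywhere:

#include <map>
#include <string>
#include <vector>

class MyClass
{
    std::vector<int> integer_vector; // direct member: no new, no delete needed
public:
    MyClass()
    {
        std::string message;
        message += "my message"; // bad_alloc may still propagate, which is fine
    }
};

void function()
{
    MyClass myclass;                  // destroyed automatically, even on exceptions
    std::map<int, char> myintcharmap; // likewise
    // ... use them; let bad_alloc propagate to a caller that can handle it ...
}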
Exception handling is a much bigger, much more opinionated topic. I'll leave it with the summary that it's typically bad to catch exceptions and squash them (usually that's only legitimate in an outer loop of a program that needs specific kinds of stability). It's also typically bad to catch exceptions in a scope so narrow that you don't know how to handle them. (For example, how would function decide whether to try again, give up, or use another approach? Its caller, or a caller further up the chain, may have more information and be in a better place to handle this.)

In most cases you should not deal with bad_alloc exceptions. Your try/catch should be removed, as should the if (myclass_ptr==NULL) check.
Ask yourself: if the process memory is exhausted, what could I possibly do? The best you can hope for is to log something, clean up / free system resources, and let the program terminate. That is the only right thing to do.
You can do this by setting the new handler (with set_new_handler). It will be called by the new operator if memory allocation fails.
std::set_new_handler(my_handler); // do your cleanup in 'my_handler'
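A minimal sketch of how that might look (my_handler is an illustrative name; note that a new-handler must free some memory, throw std::bad_alloc, or terminate, because operator new retries the allocation if the handler simply returns):

#include <cstdlib>
#include <iostream>
#include <new>

void my_handler()
{
    // Last-chance work: log, release any emergency reserve, then bail out.
    std::cerr << "out of memory" << std::endl;
    std::abort();
}

int main()
{
    std::set_new_handler(my_handler);
    // ... rest of the program: any failing 'new' now ends up in my_handler ...
}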

Related

Is a "catch all" block which deallocates dynamically allocated memory and then rethrows a valid/good design choice?

I usually make use of modern C++'s features like smart pointers and rarely use raw pointers as handles for dynamically allocated objects. Therefore, I don't have much experience with deallocation. I've wondered whether the following code example would be a valid design choice to prevent exception-induced memory leaks:
void HttpListener::spawnRequestHandler(const http_request& request) {
    std::thread handlerThread([request]() {
        IRequestHandler* handler = new HttpRequestHandler(request);
        try {
            handler->handleRequest();
        }
        catch (...) {
            delete handler;
            std::rethrow_exception(std::current_exception());
        }
    });
    handlerThread.detach();
}
There is already a good answer here.
However, it seems there are more issues with the code. Where is the request handler raw pointer supposed to be cleaned up?
Maybe it gets owned by http_request (unlikely), or it deletes itself inside handleRequest() (also unlikely); we cannot see this from the example (but both would be bad practice). It looks like a memory leak.
Also there is no need to explicitly use the interface IRequestHandler.
To sum up, the code (in the thread) might be simplified to:
HttpRequestHandler handler(request);
handler.handleRequest();
Additionally, you do not need pointers to the base class; you can use references just as well:
IRequestHandler& handlerInterface = handler;
What you suggest will work; however, I would recommend against it.
C++ is one of very few languages that has the concept of RAII, which was created to make exception-safe code possible.
In short, if you have a class, it acquires the owned thing in its constructor and does the cleanup in its destructor.
Some very good examples of this are std::unique_ptr and std::scoped_lock. However, this is also very useful if you have multiple return statements.
In this case, I would adapt the code to:
void HttpListener::spawnRequestHandler(const http_request& request) {
    std::thread handlerThread([request]() {
        auto handler = std::make_unique<HttpRequestHandler>(request);
        handler->handleRequest();
        handler.release(); // deliberately give up ownership, as in the original code
    });
    handlerThread.detach();
}
As you can see, the code is smaller and easier to read. And via release() you can still prevent the delete after everything ended correctly. I'm not sure whether that leak is a bug in your original code; if it is intended, though, this makes it explicit, which is easier to debug afterwards.
If you need other actions to happen at destruction, a scope guard can be very useful (a minimal sketch follows below). I'm not sure about the link, though; the presentation by Andrei Alexandrescu that this is based on had all the features you can imagine.
Note: if you don't need the release, you can just as well create the instance on the stack.
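For reference, a scope guard of the kind mentioned above can be sketched in a few lines (illustrative only, assuming C++17 class template argument deduction; Alexandrescu's version is far more complete):

#include <utility>

template <typename F>
class ScopeGuard {
    F cleanup_;
    bool active_ = true;
public:
    explicit ScopeGuard(F f) : cleanup_(std::move(f)) {}
    ~ScopeGuard() { if (active_) cleanup_(); }
    void dismiss() { active_ = false; } // skip the cleanup on the success path
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

// Usage: ScopeGuard guard([&] { delete handler; });
//        ... work that may throw ...
//        guard.dismiss(); // keep the object if everything succeeded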
I would suggest also putting the memory allocation into the try block and being more specific about which exceptions to catch, e.g.
IRequestHandler* handler = nullptr;
try {
    handler = new HttpRequestHandler(request);
    handler->handleRequest();
}
catch (const std::bad_alloc& e) {
    log("Not enough heap memory."); // however you log, or use cout
}
catch (const HttpRequestHandlerException1& e) {
    delete handler;
    std::rethrow_exception(std::current_exception());
}
// ... further catch blocks ...
delete handler;
You can find a good post about catching multiple exceptions here.

Difference between using try-Catch exception handler and if else condition check? [duplicate]

This question already has answers here:
Is there a general consensus in the C++ community on when exceptions should be used? [closed]
(11 answers)
Closed 9 years ago.
I have used if...else statements in many places; however, I'm new to exception handling. What is the main difference between these two?
For example:
int *ptr = new (nothrow) int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
OR
try
{
    int* myarray = new int[1000];
}
catch (exception& e)
{
    cout << "Standard exception: " << e.what() << endl;
}
So here we are using the standard exception class, which has some built-in functions like e.what(). That may be an advantage. Other than that, we could handle everything using if...else as well. Are there any other merits to using exception handling?
To collect what the comments say in an answer:
since the standardization in 1998, new does not return a null pointer on failure but throws an exception, namely std::bad_alloc. This is different from C's malloc and maybe from some early pre-standard implementations of C++, where new might have returned NULL as well (I don't know, to be honest).
There is also a possibility in C++ to get a null pointer on allocation failure instead of an exception:
int *ptr = new(std::nothrow) int[1000];
So in short, the first code you have will not work as intended, as it is an attempt at C-style error handling in the presence of C++ exceptions. If allocation fails, the exception will be thrown, the if block will never be entered, and the program will probably be terminated since you don't catch the bad_alloc.
There are lots of articles comparing general error handling with exceptions vs. return codes, and it would go way too far to try to cover the topic here. Among the reasons for exceptions are:
Function return types are not occupied by the error handling but can return real values - no "output" function parameters needed.
You do not need to handle the return value of every single function call in every single function, but can just catch the exception some levels up the call stack, where you actually can handle the error.
Exceptions can pass arbitrary information to the error handling site, as compared to one global errno variable and a single returned error code (see the sketch after this list).
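As a sketch of that last point (config_error and its fields are illustrative, not from any particular library):

#include <stdexcept>
#include <string>

// An exception type can carry arbitrary context to the handler,
// unlike a single global errno value or a bare return code.
struct config_error : std::runtime_error {
    std::string file;
    int line;
    config_error(std::string f, int l, const std::string& msg)
        : std::runtime_error(msg), file(std::move(f)), line(l) {}
};

void parse_config() {
    throw config_error("app.cfg", 42, "unexpected token");
}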
The main difference is that the version using exception handling at least might work, whereas the one using the if statement can't possibly work.
Your first snippet:
int *ptr = new int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
...seems to assume that new will return a null pointer in case of failure. While that was true at one time, it hasn't been for a long time. With any reasonably current compiler, the new has only two possibilities: succeed or throw. Therefore, your second version aligns with how C++ is supposed to work.
If you really want to use this style, you can rewrite the code to get it to return a null pointer in case of failure:
int *ptr = new (nothrow) int[1000];
if (ptr == NULL) {
    // Handle error cases here...
}
In most cases, you shouldn't be using new directly anyway -- you should really use std::vector<int> p(1000); and be done with it.
With that out of the way, I feel obliged to add that for an awful lot of code, it probably makes the most sense to do neither and simply assume that the memory allocation will succeed.
At one time (MS-DOS) it was fairly common for memory allocation to actually fail if you tried to allocate more memory than was available -- but that was a long time ago. Nowadays, things aren't so simple (as a rule). Current systems use virtual memory, which makes the situation much more complicated.
On Linux, what'll typically happen is that even if the memory isn't really available, Linux will do what's called an "overcommit". You'll still get a non-null pointer as if the allocation had succeeded -- but when you try to use the memory, bad things will happen. Specifically, Linux has what's called an "OOM killer" that basically assumes that running out of memory is a sign of a bug, so if it happens, it tries to find the buggy program(s) and kills it/them. For most practical purposes, this means your program will probably be killed, and other (semi-arbitrarily chosen) ones may be as well.
Windows stays a little closer to the model C++ expects, so if (for example) your code were running on an unattended server, the allocation might actually fail. Long before it fails, however, it'll drag the rest of the machine to its knees, madly swapping in a doomed attempt at making the allocation succeed. If the user is actually operating the machine at the time, they'll typically either kill your program or else kill some others to free up enough memory for your code to get the requested memory fairly quickly.
In none of these cases is it particularly realistic to program against the assumption that an allocation can fail though. For most practical purposes, one of two things happens: either the allocation succeeds, or the program dies.
That leads back to the previous advice: in a typical case, you should generally just use std::vector, and assume your allocation will succeed. If you need to provide availability beyond that, you just about need to do it some other way (such as re-starting the process if it dies, preferably in a way that uses less memory).
As already mentioned, your original if-else example would still throw an exception from C++98 onwards, though adding nothrow (as edited) should make it work as desired (return null, and thus trigger the if statement).
Below I'll assume, for simplicity, that for if-else to "handle exceptions" we have functions returning false on error.
Some advantages of exceptions above if-else, off the top of my head:
You know the type of the exception for logging / debugging / bug fixing
Example:
When a function throws an exception, you can, to a reasonable extent, tell whether there may be a problem with the code or something you can't do much about, like an out-of-memory condition.
With if-else, when a function returns false, you have no idea what happened in that function.
You can of course have separate logging to record this information, but why not just throw an exception with the details included instead?
You needn't have a mess of if-else conditions to propagate the exception to the calling function
Example: (comments included to indicate behaviour)
bool someFunction() // may return false on exception
{
    if (!someFunction2()) // may return false on exception
        return false;
    if (!someFunction3()) // may return false on exception
        return false;
    return someFunction4(); // may return false on exception
}
(There are many people who don't like having functions with multiple return statements. In this case, you'll have an even messier function.)
As opposed to:
void someFunction() // may throw an exception
{
    someFunction2(); // may throw an exception
    someFunction3(); // may throw an exception
    someFunction4(); // may throw an exception
}
An alternative to, or extension of, if-else is error codes. For this, the second point will remain. See this for more on the comparison between that and exceptions.
If you handle the error locally, if ... else is cleaner. If the function where the error occurs doesn't handle the error, then throw an exception to pass off to someone higher in the call chain.
First of all, your first snippet with the if statement will terminate the program if new[] throws, because the exception is not handled. You can check such things here, for example: http://www.cplusplus.com/reference/new/operator%20new%5B%5D/
Also, exceptions are thrown in many other cases, not only when allocation fails, and their main feature (in my eyes) is moving control up the application (to the place where the exception is handled). I recommend you read some more about exceptions; a good read would be "More Effective C++" by Scott Meyers, which has a great chapter on exceptions.

When has RAII an advantage over GC?

Consider this simple class that demonstrates RAII in C++ (off the top of my head):
class X {
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL)
            throw some_exception();
    }
    ~X() {
        if (fclose(fp) != 0) {
            // An error. Now what?
        }
    }
private:
    FILE *fp;
    X(X const&) = delete;
    X(X&&) = delete;
    X& operator=(X const&) = delete;
    X& operator=(X&&) = delete;
};
I can't throw an exception in the destructor. I'm having an error, but no way to report it. And this example is quite generic: I can do this not only with files, but also with e.g. POSIX threads, graphical resources, ... I note how e.g. the Wikipedia RAII page sweeps the whole issue under the rug: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
It seems to me that RAII is only useful if destruction is guaranteed to happen without error. The only resource known to me with this property is memory. Now it seems to me that e.g. Boehm pretty convincingly debunks the idea that manual memory management is a good idea in any common situation, so where is the advantage in the C++ way of using RAII, ever?
Yes, I know GC is a bit heretical in the C++ world ;-)
RAII, unlike GC, is deterministic. You will know exactly when a resource will be released, as opposed to "sometime in the future it's going to be released", depending on when the GC decides it needs to run again.
Now on to the actual problem you seem to have. This discussion came up in the Lounge<C++> chat room a while ago about what you should do if the destructor of a RAII object might fail.
The conclusion was that the best way would be to provide a specific close(), destroy(), or similar member function that gets called by the destructor but can also be called before that, if you want to circumvent the "exception during stack unwinding" problem. It would then set a flag that stops it from being called again in the destructor. std::(i|o)fstream, for example, does exactly that - it closes the file in its destructor, but also provides a close() method.
This is a straw man argument, because you're not talking about garbage collection (memory deallocation), you're talking about general resource management.
If you misused a garbage collector to close files this way, then you'd have the identical situation: you also could not throw an exception. The same options would be open to you: ignoring the error, or, much better, logging it.
The exact same problem occurs in garbage collection.
However, it's worth noting that if there is no bug in your code nor in the library code which powers your code, deletion of a resource shall never fail. delete never fails unless you corrupted your heap. This is the same story for every resource. Failure to destroy a resource is an application-terminating crash, not a pleasant "handle me" exception.
Exceptions in destructors are never useful for one simple reason: Destructors destruct objects that the running code doesn't need anymore. Any error that happens during their deallocation can be safely handled in a context-agnostic way, like logging, displaying to the user, ignoring or calling std::terminate. The surrounding code doesn't care because it doesn't need the object anymore. Therefore, you don't need to propagate an exception through the stack and abort the current computation.
In your example, fp could be safely pushed into a global queue of non-closeable files and handled later. The calling code can continue without problems.
By this argument, destructors very rarely have to throw. And in practice they rarely do, which explains the widespread use of RAII.
First: you can't really do anything useful with the error if your file object is GCed, and fails to close the FILE*. So the two are equivalent as far as that goes.
Second, the "correct" pattern is as follows:
class X {
    FILE *fp;
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL) throw some_exception();
    }
    ~X() {
        try {
            close();
        } catch (const FileError&) {
            // perhaps log, or do nothing
        }
    }
    void close() {
        if (fp != 0) {
            if (fclose(fp) != 0) {
                // may need to handle EAGAIN and EINTR, otherwise
                throw FileError();
            }
            fp = 0;
        }
    }
};
Usage:
X x;
// do stuff involving x that might throw
x.close(); // also might throw, but if not then the file is successfully closed
If "do stuff" throws, then it pretty much doesn't matter whether the file handle is closed successfully or not. The operation has failed, so the file is normally useless anyway. Someone higher up the call chain might know what to do about that, depending how the file is used - perhaps it should be deleted, perhaps left alone in its partially-written state. Whatever they do, they must be aware that in addition to the error described by the exception they see, it's possible that the file buffer wasn't flushed.
RAII is used here for managing resources. The file gets closed no matter what. But RAII is not used for detecting whether an operation has succeeded - if you want to do that then you call x.close(). GC is also not used for detecting whether an operation has succeeded, so the two are equal on that count.
You get a similar situation whenever you use RAII in a context where you're defining some kind of transaction -- RAII can roll back an open transaction on an exception, but assuming all goes OK, the programmer must explicitly commit the transaction.
The answer to your question -- the advantage of RAII, and the reason you end up flushing or closing file objects in finally clauses in Java, is that sometimes you want the resource to be cleaned up (as far as it can be) immediately on exit from the scope, so that the next bit of code knows that it has already happened. Mark-sweep GC doesn't guarantee that.
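A sketch of the commit/rollback transaction shape described above (names are illustrative, assuming C++11):

class Transaction {
    bool committed_ = false;
public:
    void commit() { committed_ = true; } // make the changes permanent
    ~Transaction() {
        if (!committed_) {
            // roll back; best-effort, nothing is thrown from the destructor
        }
    }
};

void update() {
    Transaction tx;
    // ... work that may throw; the destructor rolls back automatically ...
    tx.commit(); // the programmer must explicitly commit on success
}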
I want to chip in a few more thoughts relating to "RAII" vs. GC. The aspects of using some sort of close, destroy, finish, whatever function are already explained, as is the aspect of deterministic resource release. There are at least two more important facilities which are enabled by using destructors and, thus, by keeping track of resources in a programmer-controlled fashion:
In the RAII world it is possible to have a stale pointer, i.e. a pointer which points to an already destroyed object. What sounds like a Bad Thing actually enables related objects to be located in close proximity in memory. Even if they don't fit onto the same cache line they would, at least, fit into the same memory page. To some extent closer proximity could be achieved by a compacting garbage collector as well, but in the C++ world this comes naturally and is determined already at compile time.
Although typically memory is just allocated and released using operators new and delete, it is possible to allocate memory e.g. from a pool and arrange for an even more compact memory layout of objects known to be related. This can also be used to place objects into dedicated memory areas, e.g. shared memory or other address ranges for special hardware.
Although these uses don't necessarily use RAII techniques directly, they are enabled by the more explicit control over memory. That said, there are also memory uses where garbage collection has a clear advantage e.g. when passing objects between multiple threads. In an ideal world both techniques would be available and C++ is taking some steps to support garbage collection (sometimes referred to as "litter collection" to emphasize that it is trying to give an infinite memory view of the system, i.e. collected objects aren't destroyed but their memory location is reused). The discussions so far don't follow the route taken by C++/CLI of using two different kinds of references and pointers.
Q. When has RAII an advantage over GC?
A. In all the cases where destruction errors are not interesting (i.e. you don't have an effective way to handle those anyway).
Note that even with garbage collection, you'd have to run the 'dispose' (close, release, whatever) action manually, so you can just improve the RAII pattern in the very same way:
class X {
    FILE *fp;
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL) throw some_exception();
    }
    void close()
    {
        if (!fp)
            return;
        if (fclose(fp) != 0) {
            throw some_exception();
        }
        fp = 0;
    }
    ~X() {
        if (fp)
        {
            if (fclose(fp) != 0) {
                // An error. You're screwed, just throw or std::terminate
            }
        }
    }
};
Destructors are assumed to always succeed. Why not just make sure that fclose won't fail?
You can always call fflush or do some other things manually and check the error, to make sure that fclose will succeed later.
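For example, a sketch of that idea (error handling elided; on POSIX systems an fsync could strengthen this further):

#include <cstdio>

void careful_close(FILE* fp)
{
    // Push out buffered data first, while the error can still be reported.
    if (std::fflush(fp) != 0) {
        // report the write error here, while there is still context
    }
    // After a successful flush, a failure of fclose is much less likely.
    if (std::fclose(fp) != 0) {
        // log and move on; there is little else to do
    }
}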

Any pitfalls with allocating exceptions on the heap?

Question says it all: Are there any pitfalls with allocating exceptions on the heap?
I am asking because allocating exceptions on the heap, in combination with the polymorphic exception idiom, solves the problem of transporting exceptions between threads (for the sake of discussion, assume that I can't use exception_ptr). Or at least I think it does...
Some of my thoughts:
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Are there other ways to transport exceptions across threads?
Are there any pitfalls with allocating exceptions on the heap?
One obvious pitfall is that a heap allocation might fail.
Interestingly, when an exception is thrown it actually throws a copy of the exception object that is the argument of throw. When using gcc, it creates that copy in the heap but with a twist. If heap allocation fails it uses a static emergency buffer instead of heap:
extern "C" void *
__cxxabiv1::__cxa_allocate_exception(std::size_t thrown_size) throw()
{
void *ret;
thrown_size += sizeof (__cxa_refcounted_exception);
ret = malloc (thrown_size);
if (! ret)
{
__gnu_cxx::__scoped_lock sentry(emergency_mutex);
bitmask_type used = emergency_used;
unsigned int which = 0;
if (thrown_size > EMERGENCY_OBJ_SIZE)
goto failed;
while (used & 1)
{
used >>= 1;
if (++which >= EMERGENCY_OBJ_COUNT)
goto failed;
}
emergency_used |= (bitmask_type)1 << which;
ret = &emergency_buffer[which][0];
failed:;
if (!ret)
std::terminate ();
}
}
So, one possibility is to replicate this functionality to protect against heap allocation failures for your own exceptions.
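A rough sketch of what that could look like for your own exception type (single-threaded for brevity; the real gcc code above uses a lock and a bitmask of slots):

#include <cstdlib>
#include <exception>
#include <new>

class my_exception {
    static unsigned char emergency_[256];
    static bool emergency_used_;
public:
    void* operator new(std::size_t size) {
        if (void* p = std::malloc(size))
            return p;
        if (size <= sizeof emergency_ && !emergency_used_) {
            emergency_used_ = true; // fall back to the static buffer
            return emergency_;
        }
        std::terminate(); // nothing sensible left to do
    }
    void operator delete(void* p) {
        if (p == emergency_)
            emergency_used_ = false;
        else
            std::free(p);
    }
};
unsigned char my_exception::emergency_[256];
bool my_exception::emergency_used_ = false;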
The handler of the exception will have to catch the exception and know how to delete it. This can be solved by actually throwing an auto_ptr with the appropriate deleter.
Not sure if using auto_ptr<> is a good idea. This is because copying auto_ptr<> destroys the original, so that after catching by value as in catch(std::auto_ptr<std::exception> e), a subsequent throw; with no argument to re-throw the original exception may throw a NULL auto_ptr<>, because the original was copied from (I didn't try that).
I would probably throw a plain pointer for this reason, like throw new my_exception(...), and catch it by value and manually delete it. Because manual memory management leaves a way to leak memory, I would create a small library for transporting exceptions between threads and put such low-level code in there, so that the rest of the code doesn't have to be concerned with memory management issues.
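The convention would look roughly like this (my_exception is an illustrative type name):

struct my_exception { /* ... */ };

void worker() {
    throw new my_exception(); // throw a plain pointer
}

void consumer() {
    try {
        worker();
    }
    catch (my_exception* e) { // "by value" here: the pointer itself is copied
        // ... inspect or transport *e ...
        delete e; // manual cleanup, as described above
    }
}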
Another issue is that requiring a special throw syntax, like throw new exception(...), may be a bit too intrusive: there may be existing code or third-party libraries that can't be changed and that throw in the standard manner, like throw exception(...). It may be a good idea to just stick to the standard throw syntax and, in a top-level catch block in the worker thread, catch all possible exception types (which must be known in advance; as a fall-back, slice the exception and copy only a base-class sub-object), copy the caught exception, and re-throw the copy in the other thread, probably on join or in the function that extracts the other thread's result. (The throwing thread may also be stand-alone and yield no result at all, but that is a completely different issue; assume we are dealing with some kind of worker thread with a limited lifetime.) This way the exception handler in the other thread can catch the exception in a standard way, by reference or by value, without having to deal with the heap. I would probably choose this path.
You may also take a look at Boost: Transporting of Exceptions Between Threads.
There are two evident ones:
One - easy - is that throw new myexception risks throwing a bad_alloc (not a bad_alloc*), hence catch (exception*) doesn't catch the eventual impossible-to-allocate exception. And throw new(nothrow) myexception may end up throwing... a null pointer.
Another - more of a design issue - is "who has to catch". If it is not yourself, consider a situation where your client, who may also be a client of somebody else, has to decide - depending on who's throwing - whether to delete or not. That may result in a mess.
A typical way to solve the problem is throwing a static variable by reference (or address): it doesn't need to be deleted and doesn't have to be copied when going down the unwound stack.
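Sketched out (illustrative names; pre-C++11 style to match the discussion):

#include <exception>

struct transport_error : std::exception {
    const char* what() const throw() { return "transport error"; }
};

static transport_error g_transport_error; // static storage: never deleted

void f() {
    throw &g_transport_error; // only the pointer travels with the exception
}

void g() {
    try { f(); }
    catch (transport_error* e) { // nothing to delete, nothing was copied
        // handle e->what()
    }
}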
If the reason you're throwing an exception is that there's a problem with the heap (out of memory or otherwise) allocating your exception on the heap will just cause more problems.
There is another catch when throwing std::auto_ptr<SomeException>, and also boost shared pointers: while runtime_error is derived from exception, auto_ptr<runtime_error> is not derived from auto_ptr<exception>.
As a result, a catch(auto_ptr<exception> &) won't catch an auto_ptr<runtime_error>.

Risking the exception anti-pattern.. with some modifications

Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in place for events like this. One approach would be to write wrapper functions that encapsulate each API call:
returnCode = DEFAULT;
try
{
    returnCode = libraryAPI1();
}
catch (...)
{
    returnCode = BAD;
}
return returnCode;
The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad.
Things CAN go horribly wrong. E.g.
if the try block (or libraryAPI1()) had:
func1();
char *x=malloc(1000);
func2();
if func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome.
Could you please tell me what other things can possibly go wrong in this scenario?
This code:
func1();
char *x=malloc(1000);
func2();
Is not C++ code. This is what people refer to as "C with classes": a style of program that looks like C++ but does not match up to how C++ is used in real life. The reason is that good exception-safe C++ code practically never requires the direct use of pointers, as pointers are always contained inside a class specifically designed to manage their lifespan in an exception-safe manner (usually smart pointers or containers).
The C++ equivalent of that code is:
func1();
std::vector<char> x(1000);
func2();
A hardware failure may not lead to a C++ exception. On some systems, hardware exceptions are a completely different mechanism than C++ exceptions. On others, C++ exceptions are built on top of the hardware exception mechanism. So this isn't really a general design question.
If you want to be able to recover, you need to be transactional--each state change needs to run to completion or be backed out completely. RAII is one part of that. As Chris Becke points out in another answer, there's more to state than resource acquisition.
There's a copy-modify-swap idiom that's used a lot for transactions, but that might be way too heavy if you're trying to adapt working code to handle this one-in-a-million case.
If you truly need robustness, then isolate the code into a process. If a hardware fault kills the process, you can have a watchdog restart it. The OS will reclaim the lost resources. Your code would only need to worry about being transactional with persistent state, like stuff saved to files.
Do you have control over the libraryAPI implementation?
If it can fit into an OO model, you can design it using the RAII pattern, which guarantees that the destructor (which will release acquired resources) is invoked on an exception.
Using a resource-management helper such as a smart pointer helps too:
try
{
    someNormalFunction();
    cSmartPtr<BYTE> pBuf = malloc(1000); // cSmartPtr: some RAII wrapper type (pseudocode)
    someExceptionThrowingFunction();
}
catch (...)
{
    // Do logging and other necessary actions,
    // but no cleanup required for <pBuf>
}
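With only standard library tools, the same shape could be sketched like this (assuming C++11; std::unique_ptr with a custom deleter plays the role of cSmartPtr):

#include <cstdlib>
#include <memory>

try
{
    someNormalFunction();
    std::unique_ptr<unsigned char, void (*)(void*)> pBuf(
        static_cast<unsigned char*>(std::malloc(1000)), std::free);
    someExceptionThrowingFunction();
}
catch (...)
{
    // Do logging and other necessary actions;
    // the buffer is freed automatically during stack unwinding.
}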
The problem with exceptions is - even if you do re-engineer with RAII - it's still easy to write code that becomes desynchronized:
void SomeClass::SomeMethod()
{
    this->stateA++;
    SomeOtherMethod();
    this->stateB++;
}
Now, the example might look artifical, but if you substitue stateA++ and stateB++ for operations that change the state of the class in some way, the expected outcome of this class is for the states to remain in sync. RAII might solve some of the problems associated with state when using exceptions, but all it does is provide a false sense of security - If SomeOtherMethod() throws an exception ALL the surrounding code needs to be analyzed to ensure that the post conditions (stateA.delta == stateB.delta) are met.