Who is responsible for delete? - C++

I was analyzing some code and am confused by a particular piece of it.
I have posted code/pseudo-code below which conveys the same meaning.
Class 1
void Class1::Func1()
{
    Collection* cltn;
    try
    {
        cltn = Class2::get_records_from_db();
    }
    catch (InformixError&)
    {}
    catch (DBError&)
    {}
    catch (...)
    {
        // "Unknown exception" -- I always get this once the process
        // has worked through a lot of records
    }
}
Class 2
Collection* Class2::get_records_from_db()
{
    Collection* clt = new Collection();
    try
    {
        // Query the database
        for (/* each row in the query result */)
        {
            Row* row = new Row();
            populate(row);
            clt->add(*row);
            ...
        }
        if (/* Informix error */)
        {
            throw InformixError();
        }
    }
    catch (...)
    {
        delete clt; // Who will delete each row?
        clt = 0;
        throw DBError();
    }
    return clt; // Who will delete clt?
}
Problem - PART 2
Thanks for the insights on the first problem. Now here is the real problem which is happening.
Class 1 is a C++ process and Class 2 is a library which talks to an Informix database.
Class2::get_records_from_db() is a function which queries an Informix DB and returns the result set. I have expanded the code above so that it is closer to the real code.
The Collection object deals with some 200k Row objects which, as most of you said, are not released properly.
The caller is seeing "Unknown exception" in the catch-all block. Can that be caused by the huge memory leaks created in Class 2?
I also see some Informix 406 errors (out of memory) in the logs. The process core-dumps after spitting out a series of Unknown Exception and SQLERR406 messages.
I want to know whether the core dump is a byproduct of the memory leaks.

What is the problem with the code you presented?
The code example you present is simply bad and wrong.
No one deletes either of them (row or clt). This leads to a memory leak, or worse if the program depends on side effects of their (nontrivial) destructors, which will never run. Either way, very bad things can happen.
If you allocate an object using new, you need to explicitly deallocate it by calling delete on the pointer returned by new. Since you never call delete on either pointer, neither object is ever deallocated.
Who should be responsible for delete?
The objects themselves!
The objects should have built-in functionality to deallocate themselves as soon as their scope ({, }) ends. That way no one needs to explicitly deallocate any of the objects; they get implicitly deleted once they are not needed anymore. This technique is popularly known as Resource Acquisition Is Initialization (RAII), or Scope-Bound Resource Management (SBRM), in C++.
Each of your objects (row and clt) should use RAII, either through wrappers written over these raw pointers or, even better, simply through the readily available smart pointers.

Smart pointers are what you need. You should put each new Row into a std::shared_ptr<Row> instead of a raw pointer; those shared_ptrs will be automatically cleaned up when they go out of scope (e.g. when the try-catch block exits).
What you should do with clt isn't quite so clear-cut... I'd be tempted to store it in a std::unique_ptr<Collection> and return that, because then (a) it is clear that it will be automatically deleted at some point (at the latest when your program exits), and (b) it is clear to the calling code that it now owns the value returned by get_records_from_db(), not the Class2 instance (or singleton) that generated it.
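As a sketch of what the two answers above suggest (Collection, Row, populate and the cursor function below are stand-ins for the question's types, not a real API):

#include <memory>
#include <vector>

// Minimal stand-ins for the question's types (assumptions, for illustration only).
struct Row { int id = 0; };
struct Collection {
    std::vector<Row> rows;
    void add(const Row& r) { rows.push_back(r); }
};

void populate(Row& r) { r.id = 42; }                  // placeholder for the real DB read
bool next_row() { static int n = 3; return n-- > 0; } // placeholder query cursor

std::unique_ptr<Collection> get_records_from_db()
{
    auto clt = std::make_unique<Collection>(); // freed automatically on every exit path
    while (next_row())
    {
        Row row;        // no `new` needed: Collection stores Rows by value anyway,
        populate(row);  // and a stack Row is destroyed even if populate() throws
        clt->add(row);
    }
    return clt;         // the caller visibly owns the Collection
}

If an Informix error is detected mid-loop you can simply throw; no catch block is needed just to free memory.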
Clear ownership semantics are a good thing.

Related

Is a "catch all" block which deallocates dynamically allocated memory and then rethrows a valid/good design choice?

I usually make use of modern C++ features like smart pointers and rarely use raw pointers as handles for dynamically allocated objects. Therefore, I don't have much experience with deallocation. I've wondered whether the following code example would be a valid design choice to prevent exception-induced memory leaks:
void HttpListener::spawnRequestHandler(const http_request& request) {
    std::thread handlerThread([request](){
        IRequestHandler* handler = new HttpRequestHandler(request);
        try {
            handler->handleRequest();
        }
        catch (...) {
            delete handler;
            std::rethrow_exception(std::current_exception());
        }
    });
    handlerThread.detach();
}
There is already a good answer here.
However, it seems there are more issues with the code. Where is the request handler's raw pointer supposed to be cleaned up?
Maybe it gets owned by http_request (unlikely), or it deletes itself inside handleRequest() (also unlikely); we cannot see this from the example (and both would be bad practice). It looks like a memory leak.
Also, there is no need to explicitly use the interface IRequestHandler.
To sum up, the code (in the thread) might be simplified to:
HttpRequestHandler handler(request);
handler.handleRequest();
Additionally, you do not need pointers to the base class; you can use references just as well:
IRequestHandler& handlerInterface = handler;
What you suggest will work; however, I would recommend against it.
C++ is one of very few languages that has the concept of RAII, which was created to write exception-safe code.
In short: a class acquires or creates the thing it owns in its constructor and does the cleanup in its destructor.
Very good examples of this are std::unique_ptr and std::scoped_lock. RAII is also very useful if you have multiple return statements.
In this case, I would adapt the code to:
void HttpListener::spawnRequestHandler(const http_request& request) {
    std::thread handlerThread([request](){
        auto handler = std::make_unique<HttpRequestHandler>(request);
        handler->handleRequest();
        handler.release();
    });
    handlerThread.detach();
}
As you can see, the code is smaller and easier to read, and via release() you can still suppress the delete once everything has ended correctly. I'm not sure whether that leak-on-success is intended in your original code; if it is, this makes it explicit, which is easier to debug afterwards.
If you need other actions to happen at destruction, a scope guard can be very useful. The presentation by Andrei Alexandrescu that scope guards are based on covers every feature you can imagine.
Note: if you don't need the release(), you can just as well create the instance on the stack.
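For reference, here is a minimal scope-guard sketch in the spirit of that talk (the names here are made up for illustration; real scope-guard libraries are more complete):

#include <utility>

// Minimal scope guard: runs a callable when the scope exits,
// unless dismiss() was called first.
template <typename F>
class ScopeGuard {
    F func_;
    bool active_ = true;
public:
    explicit ScopeGuard(F f) : func_(std::move(f)) {}
    ~ScopeGuard() { if (active_) func_(); }
    void dismiss() { active_ = false; }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

// Usage: the cleanup runs even if code() throws.
//   ScopeGuard guard([&]{ cleanup(); });
//   code();
//   guard.dismiss(); // keep the resource if everything succeeded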
I would suggest also putting the memory allocation into the try block and being more specific about which exceptions you catch, e.g.:
IRequestHandler* handler = nullptr;
try {
    handler = new HttpRequestHandler(request);
    handler->handleRequest();
}
catch (const std::bad_alloc& e) {
    log("Not enough heap memory."); // however you log, or use cout
}
catch (const HttpRequestHandlerExeption1& e) {
    delete handler;
    std::rethrow_exception(std::current_exception());
}
.
.
.
delete handler;
You can find a good post about catching multiple exceptions here.

Exceptions on unique_ptr and make_unique [duplicate]

There is a method called foo that sometimes makes my program terminate with the following error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Abort
Is there a way that I can use a try-catch block to stop this error from terminating my program (all I want to do is return -1)?
If so, what is the syntax for it?
How else can I deal with bad_alloc in C++?
In general you cannot, and should not try, to respond to this error. bad_alloc indicates that a resource cannot be allocated because not enough memory is available. In most scenarios your program cannot hope to cope with that, and terminating soon is the only meaningful behaviour.
Worse, modern operating systems often over-allocate: on such systems, malloc and new can return a valid pointer even if there is not enough free memory left – std::bad_alloc will never be thrown, or is at least not a reliable sign of memory exhaustion. Instead, attempts to access the allocated memory will then result in a segmentation fault, which is not catchable (you can handle the segmentation fault signal, but you cannot resume the program afterwards).
The only thing you could do when catching std::bad_alloc is to perhaps log the error, and try to ensure a safe program termination by freeing outstanding resources (but this is done automatically in the normal course of stack unwinding after the error gets thrown if the program uses RAII appropriately).
In certain cases, the program may attempt to free some memory and try again, or use secondary memory (= disk) instead of RAM but these opportunities only exist in very specific scenarios with strict conditions:
The application must ensure that it runs on a system that does not overcommit memory, i.e. it signals failure upon allocation rather than later.
The application must be able to free memory immediately, without any further accidental allocations in the meantime.
It’s exceedingly rare that applications have control over point 1 — userspace applications never do, it’s a system-wide setting that requires root permissions to change.1
OK, so let’s assume you’ve fixed point 1. What you can now do is for instance use a LRU cache for some of your data (probably some particularly large business objects that can be regenerated or reloaded on demand). Next, you need to put the actual logic that may fail into a function that supports retry — in other words, if it gets aborted, you can just relaunch it:
lru_cache<widget> widget_cache;

double perform_operation(int widget_id) {
    std::optional<widget> maybe_widget = widget_cache.find_by_id(widget_id);
    if (not maybe_widget) {
        maybe_widget = widget_cache.store(widget_id, load_widget_from_disk(widget_id));
    }
    return maybe_widget->frobnicate();
}

…

for (int num_attempts = 0; num_attempts < MAX_NUM_ATTEMPTS; ++num_attempts) {
    try {
        return perform_operation(widget_id);
    } catch (std::bad_alloc const&) {
        if (widget_cache.empty()) throw; // memory error elsewhere.
        widget_cache.remove_oldest();
    }
}
// Handle too many failed attempts here.
But even here, using std::set_new_handler instead of handling std::bad_alloc provides the same benefit and would be much simpler.
1 If you’re creating an application that does control point 1, and you’re reading this answer, please shoot me an email, I’m genuinely curious about your circumstances.
You can catch it like any other exception:
try {
    foo();
}
catch (const std::bad_alloc&) {
    return -1;
}
Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.
What is the C++-standard-specified behavior of new in C++?
The usual notion is that if the new operator cannot allocate dynamic memory of the requested size, it should throw an exception of type std::bad_alloc.
However, something more happens even before a bad_alloc exception is thrown:
C++03 Section 3.7.4.1.3 says:
An allocation function that fails to allocate storage can invoke the currently installed new_handler (18.4.2.2), if any. [Note: A program-supplied allocation function can obtain the address of the currently installed new_handler using the set_new_handler function (18.4.2.3).] If an allocation function declared with an empty exception-specification (15.4), throw(), fails to allocate storage, it shall return a null pointer. Any other allocation function that fails to allocate storage shall only indicate failure by throwing an exception of class std::bad_alloc (18.4.2.1) or a class derived from std::bad_alloc.
Consider the following code sample:
#include <iostream>
#include <cstdlib>
#include <new> // std::set_new_handler

// function to call if operator new can't allocate enough memory
void outOfMemHandler()
{
    std::cerr << "Unable to satisfy request for memory\n";
    std::abort();
}

int main()
{
    // set the new_handler
    std::set_new_handler(outOfMemHandler);

    // request a huge amount of memory, which will cause ::operator new to fail
    int *pBigDataArray = new int[100000000L];
    return 0;
}
In the above example, operator new will (most likely) be unable to allocate space for 100,000,000 integers, so the function outOfMemHandler() will be called, and the program will abort after issuing an error message.
As seen here, the default behavior of operator new when it cannot fulfill a memory request is to call the new-handler function repeatedly until it can find enough memory or there are no more new-handlers. In the above example, unless we called std::abort(), outOfMemHandler() would be called repeatedly. Therefore, the handler should either ensure that the next allocation succeeds, register another handler, register no handler, or not return (i.e. terminate the program). If there is no new-handler and the allocation fails, operator new throws an exception.
What is the new_handler and set_new_handler?
new_handler is a typedef for a pointer to a function that takes and returns nothing, and set_new_handler is a function that takes and returns a new_handler.
Something like:
typedef void (*new_handler)();
new_handler set_new_handler(new_handler p) throw();
set_new_handler's parameter is a pointer to the function operator new should call if it can't allocate the requested memory. Its return value is a pointer to the previously registered handler function, or null if there was no previous handler.
How to handle out-of-memory conditions in C++?
Given the behavior of new, a well-designed user program should handle out-of-memory conditions by providing a proper new_handler which does one of the following:
Make more memory available: This may allow the next memory-allocation attempt inside operator new's loop to succeed. One way to implement this is to allocate a large block of memory at program start-up, then release it for use in the program the first time the new-handler is invoked (see the sketch after this list).
Install a different new-handler: If the current new-handler can't make any more memory available, and if there is another new-handler that can, then the current new-handler can install the other new-handler in its place (by calling set_new_handler). The next time operator new calls the new-handler function, it will get the one most recently installed.
(A variation on this theme is for a new-handler to modify its own behavior, so the next time it's invoked, it does something different. One way to achieve this is to have the new-handler modify static, namespace-specific, or global data that affects the new-handler's behavior.)
Uninstall the new-handler: This is done by passing a null pointer to set_new_handler. With no new-handler installed, operator new will throw an exception ((convertible to) std::bad_alloc) when memory allocation is unsuccessful.
Throw an exception convertible to std::bad_alloc. Such exceptions will not be caught by operator new, but will propagate to the site originating the request for memory.
Not return: By calling abort or exit.
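To illustrate the first option, here is a sketch of a new-handler that releases an emergency reserve grabbed at start-up (the reserve size and the names are assumptions):

#include <cstdlib>
#include <new>

namespace {
    char* emergency_reserve = nullptr; // grabbed at start-up, released under pressure
}

void out_of_memory_handler()
{
    if (emergency_reserve) {
        delete[] emergency_reserve;    // free the reserve: the retried allocation may now succeed
        emergency_reserve = nullptr;
    } else {
        std::set_new_handler(nullptr); // nothing left to free: let new throw bad_alloc
    }
}

int main()
{
    emergency_reserve = new char[10 * 1024 * 1024]; // 10 MB cushion (size is an assumption)
    std::set_new_handler(out_of_memory_handler);
    // ... rest of the program ...
}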
I would not suggest this, since bad_alloc means you are out of memory. It would be best to just give up instead of attempting to recover. However, here is the solution you are asking for:
try {
    foo();
} catch (const std::bad_alloc& e) {
    return -1;
}
May I suggest a simpler (and even faster) solution for this: the nothrow form of new returns a null pointer if the memory could not be allocated.
int fv() {
    T* p = new (std::nothrow) T[1000000];
    if (!p) return -1;
    do_something(p);
    delete[] p; // note the array form, since new[] was used
    return 0;
}
I hope this could help!
Let your foo program exit in a controlled way:
#include <stdlib.h> /* exit, EXIT_FAILURE */

try {
    foo();
} catch (const std::bad_alloc&) {
    exit(EXIT_FAILURE);
}
Then write a shell program that calls the actual program. Since the address spaces are separated, the state of your shell program is always well-defined.
Of course you can catch a bad_alloc, but I think the better question is how you can stop a bad_alloc from happening in the first place.
Generally, bad_alloc means that something went wrong in an allocation of memory - for example when you are out of memory. If your program is 32-bit, then this already happens when you try to allocate >4 GB. This happened to me once when I copied a C-string to a QString. The C-string wasn't '\0'-terminated which caused the strlen function to return a value in the billions. So then it attempted to allocate several GB of RAM, which caused the bad_alloc.
I have also seen bad_alloc when I accidentally accessed an uninitialized variable in the initializer-list of a constructor. I had a class foo with a member T bar. In the constructor I wanted to initialize the member with a value from a parameter:
foo::foo(T baz)
    : bar(bar) // <-- mistyped: bar instead of baz
{
}
Because I had mistyped the parameter, the constructor initialized bar with itself (so it read an uninitialized value!) instead of the parameter.
valgrind can be very helpful with such errors!

VS2012 Returning vector of Oracle unique_ptr's crash

I have a code structure where I read Oracle rows from the database and assign them to a common model that represents their data (called commonmodel::Model). I'm using VS2012 on Windows 7.
My issue is with the piece of code below, where I execute some statement like SELECT ...
I'm running a test and the tables are empty, so no data is returned from the SELECT ... and thus the body of while (resultSet->next()) is never entered.
My program compiles, but it crashes at runtime on returning the data (return retData). I have no idea what's causing this behaviour and I would like help solving it.
BTW: I chose to create std::unique_ptrs for the Oracle pointers so that I can leave it to the compiler to free these pointers when they are not needed anymore. That way I don't need to delete them at the end of the operation.
std::vector<std::unique_ptr<commonmodel::Model>> OracleDatabase::ExecuteStmtReturningData(std::string sql, int& totalRecords, commonmodel::Model& modelTemplate)
{
    std::unique_ptr<oracle::occi::Statement> stmt(connection->createStatement());
    stmt->setAutoCommit(TRUE);
    std::unique_ptr<oracle::occi::ResultSet> res(stmt->executeQuery(sql));
    std::vector<std::unique_ptr<commonmodel::Model>> ret = getModelsFromResultSet(res, modelTemplate);
    return ret;
}
std::vector<std::unique_ptr<commonmodel::Model>> OracleDatabase::getModelsFromResultSet(std::unique_ptr<oracle::occi::ResultSet>& resultSet, commonmodel::Model& modelTemplate)
{
    std::vector<std::unique_ptr<commonmodel::Model>> retData;
    std::vector<oracle::occi::MetaData> resultMeta = resultSet->getColumnListMetaData();
    while (resultSet->next())
    {
        std::unique_ptr<commonmodel::Model> model = modelTemplate.clone();
        for (unsigned int i = 1; i <= resultMeta.size(); i++) // ResultSet starts with one, not zero
        {
            std::string label = resultMeta.at(i).getString(oracle::occi::MetaData::ATTR_NAME);
            setPropertyFromResultSet(resultSet, label, i, *model);
        }
        retData.push_back(std::move(model)); // unique_ptr can only be moved, not copied
    }
    return retData; // <<<==== CRASH ON RETURN....
}
You can't use a 'stock' unique_ptr to deal with OCCI objects and pointers. OCCI doesn't want you to delete those pointers (which is what unique_ptr is going to do); instead, it wants you to free them using OCCI-provided mechanisms.
In particular, to free a Statement object you should use Connection::terminateStatement. The same goes for ResultSet*, and any other OCCI pointer for that matter.
Now, you can supply a custom deleter to the unique_ptr object, but the issue there is that you'd need a pointer to the already-existing 'parent' object for this, and it is hard to manage the lifetimes of independent pointers in such a way.
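For illustration, a custom deleter tied to the parent Connection could look like this (a sketch only; it assumes the Connection outlives every Statement created from it, which is exactly the lifetime-management difficulty mentioned above):

#include <memory>
#include <string>
#include <occi.h> // OCCI headers; link against the OCCI libraries

// Deleter that frees a Statement through its parent Connection,
// as OCCI requires, instead of calling delete.
struct StatementDeleter {
    oracle::occi::Connection* conn; // must outlive the Statement
    void operator()(oracle::occi::Statement* stmt) const {
        if (stmt) conn->terminateStatement(stmt);
    }
};

using StatementPtr = std::unique_ptr<oracle::occi::Statement, StatementDeleter>;

StatementPtr makeStatement(oracle::occi::Connection* conn, const std::string& sql) {
    return StatementPtr(conn->createStatement(sql), StatementDeleter{conn});
}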
On a side note, I strongly advise against using OCCI. It is a badly designed, poorly documented library. Plain OCI is a much better choice.

Using STL containers with exception handling in low memory situation

I am using STL containers in my code (developed in C++ using Visual Studio 2010).
I have never used exception handling with STL containers before. Since STL containers can throw bad_alloc, I plan to use them as in the sample code shown below. Let's assume function() gets called in a low-memory situation.
Now, I am not sure whether this code is foolproof, or whether I need to do any additional cleanup.
class MyClass
{
    std::vector<int>* integer_vector;
public:
    MyClass()
    {
        std::string message;
        message += "my message"; // bad_alloc could be thrown from here
        integer_vector = new std::vector<int>; // bad_alloc could be thrown from here
    }
};

void function()
{
    try
    {
        MyClass* myclass_ptr;
        myclass_ptr = new (std::nothrow) MyClass;
        if (myclass_ptr == NULL)
        {
            // ANY CLEANUP NEEDED HERE ?
            return;
        }
        std::map<int, char> myintcharmap; // bad_alloc could be thrown from here
    }
    catch (...)
    {
        // ANY CLEANUP NEEDED HERE ?
        return;
    }
}
Please can someone have a look and help.
You have two main potential leaks in the code you show. Both of which arguably stem from using raw pointers. You should prefer using std::unique_ptr (if you have C++11) or other similar "smart" pointers to indicate ownership, and for exception safety in general. Modern guidance is to avoid almost all usage of new or delete; when they cannot be avoided, they need to be paired. Note that your code has two calls to new but none to delete.
Inside function, the core problem is that you could fully allocate the data "owned" by myclass_ptr, cause an exception in a later allocation, and then be unable to clean it up, because myclass_ptr (declared inside the try block) is no longer in scope in the catch block.
Let's say you fixed that so it cleaned up the MyClass instance if an exception occurred after its creation. Your code would still leak because inside MyClass there's currently a similar problem with integer_vector. Although you could follow the rule of three and write a destructor to handle this case, it's probably easier to use a smart pointer here as well.
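A sketch of the same class without raw owning pointers (assuming the vector does not actually need to live on the heap; a by-value member is usually better than even a std::unique_ptr here):

#include <string>
#include <vector>

class MyClass
{
    std::vector<int> integer_vector; // by value: no new, no delete, nothing to leak
public:
    MyClass()
    {
        std::string message;
        message += "my message"; // may still throw bad_alloc, but nothing leaks
    }
};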
Exception handling is much bigger, much more opinionated topic. I'll leave it with the summary that it's typically bad to catch exceptions and squash them (usually that's only legit in an outer loop of a program that needs specific kinds of stability). It's also typically bad to catch exceptions in a scope so narrow that you don't know how to handle them. (For example, how would function decide whether to try again, give up, or use another approach? Its caller, or the caller further on up the chain, may have more information and be in a better place to handle this.)
In most cases you should not deal with bad_alloc exceptions. Your try/catch should be removed, as well as if (myclass_ptr==NULL).
Ask yourself: if the process memory is exhausted, what could I possibly do? The best you can hope for is to log something, clean up / free system resources, and let the program terminate. This is the only right thing to do.
You can do this by setting the new_handler (with set_new_handler). It will be called by operator new if memory allocation fails.
std::set_new_handler(my_handler); // do your cleanup in 'my_handler'

When has RAII an advantage over GC?

Consider this simple class that demonstrates RAII in C++ (off the top of my head):
class X {
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL)
            throw some_exception();
    }
    ~X() {
        if (fclose(fp) != 0) {
            // An error. Now what?
        }
    }
private:
    FILE *fp;
    X(X const&) = delete;
    X(X&&) = delete;
    X& operator=(X const&) = delete;
    X& operator=(X&&) = delete;
};
I can't throw an exception in the destructor. I'm having an error, but no way to report it. And this example is quite generic: I can do this not only with files, but also with e.g. POSIX threads, graphical resources, ... I note how e.g. the Wikipedia RAII page sweeps the whole issue under the rug: http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
It seems to me that RAII is only useful if destruction is guaranteed to happen without error. The only resource known to me with this property is memory. Now it seems to me that e.g. Boehm pretty convincingly debunks the idea that manual memory management is a good idea in any common situation, so where is the advantage in the C++ way of using RAII, ever?
Yes, I know GC is a bit heretical in the C++ world ;-)
RAII, unlike GC, is deterministic. You will know exactly when a resource will be released, as opposed to "sometime in the future it's going to be released", depending on when the GC decides it needs to run again.
Now on to the actual problem you seem to have. This discussion came up in the Lounge<C++> chat room a while ago, about what you should do if the destructor of a RAII object might fail.
The conclusion was that the best way is to provide a specific close(), destroy(), or similar member function that gets called by the destructor but can also be called before that, if you want to circumvent the "exception during stack unwinding" problem. Calling it early sets a flag that stops it from being called again in the destructor. std::(i|o)fstream, for example, does exactly that: it closes the file in its destructor, but also provides a close() method.
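For example, using std::ofstream in that style might look like this (a sketch; whether close() can actually fail depends on the platform and the stream state):

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out("data.txt");
    out << "important data\n";
    out.close();    // explicit close: a failure is still observable here
    if (!out) {     // close() sets failbit on failure
        std::cerr << "write or close failed\n";
        return 1;
    }
    return 0;
}   // had we relied on the destructor alone, any error would be swallowed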
This is a straw man argument, because you're not talking about garbage collection (memory deallocation), you're talking about general resource management.
If you misused a garbage collector to close files this way, then you'd have the identical situation: you also could not throw an exception. The same options would be open to you: ignoring the error, or, much better, logging it.
The exact same problem occurs in garbage collection.
However, it's worth noting that if there is no bug in your code nor in the library code which powers your code, deletion of a resource shall never fail. delete never fails unless you corrupted your heap. This is the same story for every resource. Failure to destroy a resource is an application-terminating crash, not a pleasant "handle me" exception.
Exceptions in destructors are never useful for one simple reason: Destructors destruct objects that the running code doesn't need anymore. Any error that happens during their deallocation can be safely handled in a context-agnostic way, like logging, displaying to the user, ignoring or calling std::terminate. The surrounding code doesn't care because it doesn't need the object anymore. Therefore, you don't need to propagate an exception through the stack and abort the current computation.
In your example, fp could be safely pushed into a global queue of non-closeable files and handled later. The calling code can continue without problems.
By this argument, destructors very rarely have to throw. In practice, they really do rarely throw, explaining the widespread use of RAII.
First: you can't really do anything useful with the error if your file object is GCed, and fails to close the FILE*. So the two are equivalent as far as that goes.
Second, the "correct" pattern is as follows:
class X {
    FILE *fp;
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL) throw some_exception();
    }
    ~X() {
        try {
            close();
        } catch (const FileError&) {
            // perhaps log, or do nothing
        }
    }
    void close() {
        if (fp != 0) {
            if (fclose(fp) != 0) {
                // may need to handle EAGAIN and EINTR, otherwise
                throw FileError();
            }
            fp = 0;
        }
    }
};
Usage:
X x;
// do stuff involving x that might throw
x.close(); // also might throw, but if not then the file is successfully closed
If "do stuff" throws, then it pretty much doesn't matter whether the file handle is closed successfully or not. The operation has failed, so the file is normally useless anyway. Someone higher up the call chain might know what to do about that, depending how the file is used - perhaps it should be deleted, perhaps left alone in its partially-written state. Whatever they do, they must be aware that in addition to the error described by the exception they see, it's possible that the file buffer wasn't flushed.
RAII is used here for managing resources. The file gets closed no matter what. But RAII is not used for detecting whether an operation has succeeded - if you want to do that then you call x.close(). GC is also not used for detecting whether an operation has succeeded, so the two are equal on that count.
You get a similar situation whenever you use RAII in a context where you're defining some kind of transaction -- RAII can roll back an open transaction on an exception, but assuming all goes OK, the programmer must explicitly commit the transaction.
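A sketch of that transaction pattern (the Database type and its begin/commit/rollback calls are invented here for illustration):

#include <iostream>

// Minimal stand-in for a real database handle.
struct Database {
    void begin()    { std::cout << "BEGIN\n"; }
    void commit()   { std::cout << "COMMIT\n"; }
    void rollback() { std::cout << "ROLLBACK\n"; }
};

// RAII transaction guard: rolls back unless explicitly committed.
class TransactionGuard {
    Database& db_;
    bool committed_ = false;
public:
    explicit TransactionGuard(Database& db) : db_(db) { db_.begin(); }
    ~TransactionGuard() { if (!committed_) db_.rollback(); } // runs on the exception path
    void commit() { db_.commit(); committed_ = true; }       // call explicitly on success
    TransactionGuard(const TransactionGuard&) = delete;
    TransactionGuard& operator=(const TransactionGuard&) = delete;
};

// Usage:
//   TransactionGuard tx(db);
//   do_updates(db); // if this throws, ~TransactionGuard rolls back
//   tx.commit();    // otherwise commit explicitly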
The answer to your question -- the advantage of RAII, and the reason you end up flushing or closing file objects in finally clauses in Java, is that sometimes you want the resource to be cleaned up (as far as it can be) immediately on exit from the scope, so that the next bit of code knows that it has already happened. Mark-sweep GC doesn't guarantee that.
I want to chip in a few more thoughts relating to "RAII" vs. GC. The aspects of using some sort of close, destroy, finish, whatever function have already been explained, as has the aspect of deterministic resource release. There are at least two more important facilities which are enabled by using destructors and thus by keeping track of resources in a programmer-controlled fashion:
In the RAII world it is possible to have a stale pointer, i.e. a pointer which points to an already destroyed object. What sounds like a Bad Thing actually enables related objects to be located in close proximity in memory. Even if they don't fit onto the same cache line, they would at least fit into the same memory page. To some extent closer proximity could be achieved by a compacting garbage collector as well, but in the C++ world this comes naturally and is determined already at compile time.
Although memory is typically just allocated and released using the operators new and delete, it is possible to allocate memory e.g. from a pool and arrange for an even more compact memory layout of objects known to be related. This can also be used to place objects into dedicated memory areas, e.g. shared memory or other address ranges for special hardware.
Although these uses don't necessarily use RAII techniques directly, they are enabled by the more explicit control over memory. That said, there are also memory uses where garbage collection has a clear advantage e.g. when passing objects between multiple threads. In an ideal world both techniques would be available and C++ is taking some steps to support garbage collection (sometimes referred to as "litter collection" to emphasize that it is trying to give an infinite memory view of the system, i.e. collected objects aren't destroyed but their memory location is reused). The discussions so far don't follow the route taken by C++/CLI of using two different kinds of references and pointers.
Q. When has RAII an advantage over GC?
A. In all the cases where destruction errors are not interesting (i.e. you don't have an effective way to handle those anyway).
Note that even with garbage collection you'd have to run the 'dispose' (close, release, whatever) action manually, so you can just improve the RAII pattern in the very same way:
class X {
    FILE *fp;
public:
    X() {
        fp = fopen("whatever", "r");
        if (fp == NULL) throw some_exception();
    }
    void close()
    {
        if (!fp)
            return;
        if (fclose(fp) != 0) {
            throw some_exception();
        }
        fp = 0;
    }
    ~X() {
        if (fp)
        {
            if (fclose(fp) != 0) {
                // An error. You're screwed, just throw or std::terminate
            }
        }
    }
};
Destructors are assumed to always succeed. Why not just make sure that fclose won't fail?
You can always call fflush or do some other things manually and check the error, to make sure that fclose will succeed later.
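A sketch of that idea (error handling kept minimal; which errors fflush can report is platform-dependent):

#include <cstdio>

// Flush explicitly so write errors surface while we can still report them;
// after a successful fflush, a later fclose is much less likely to fail.
bool write_and_flush(std::FILE* fp, const char* data)
{
    if (std::fputs(data, fp) == EOF) return false; // write error
    if (std::fflush(fp) != 0) return false;        // flush error (inspect errno)
    return true;
}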