Static instance of MSXML2::IXMLDOMDocument2* becoming invalid - C++

I have a C++ dll (x.dll) which exports a class that uses a static instance of MSXML2::IXMLDOMDocument2*.
In X.dll
wrapper.h
class EXPORTEDCLASS wrapper
{
wrapper();
public:
// Some accessor methods.
private:
PIMPL* pImpl;
};
wrapper.cpp
class PIMPL
{
public:
PIMPL();
static MSXML2::IXMLDOMDocumentPtr m_pDomDocument;
static bool s_bInit;
static void initDomDocument();
};
PIMPL::PIMPL()
{
initDomDocument();
}
void PIMPL::initDomDocument()
{
if(!s_bInit)
{
HRESULT hr = CoCreateInstance(CLSID_DOMDocument40, NULL, CLSCTX_INPROC_SERVER,
IID_IXMLDOMDocument2, (void**)&m_pDomDocument);
m_pDomDocument->load(strFileName);
s_bInit = true;
}
}
wrapper::wrapper()
{
pImpl = new PIMPL();
}
m_pDomDocument is not released anywhere. In some places it is assigned to local smart pointers, which are not explicitly released either.
In the application, the first call to the wrapper comes from the DllMain of some other DLL. At that point the m_pDomDocument pointer is created, and all the calls to the wrapper succeed.
When the next call comes, which also happens to be from the DllMain of some other DLL, I find that s_bInit is true, so I don't construct the object again.
But this time m_pDomDocument is somehow invalid. (Its value is the same as on the first call, but its vptr is invalid.)
Can anybody tell me what might be going wrong here?

The issue is resolved.
There was an untimely call to CoUninitialize, which freed the COM library.

Try using this for your COM object creation:
m_pDomDocument.CreateInstance("MSXML2.DOMDocument");

Related

G_LOCK behavior changed from glib 2.46 to glib 2.48?

I'm looking at a piece of code, which did work until recently. Basically, I have a C++ class, in which I protect a variable with a G_LOCK_DEFINE macro.
class CSomeClass {
private:
gulong mSomeCounter;
G_LOCK_DEFINE(mSomeCounter);
public:
CSomeClass ();
}
The constructor is implemented in a separate .cpp file.
CSomeClass::CSomeClass()
{
G_LOCK(mSomeCounter);
mSomeCounter = 0;
G_UNLOCK(mSomeCounter);
}
This variable is accessed in several functions, but the principle is always the same. Now, as already said, the code compiles fine and did also run flawlessly in the past. But recently I started getting a deadlock whenever execution reaches a G_LOCK command. For debugging, I have already restricted the program to a single thread, to rule out logical errors.
I did update to Ubuntu 16.04 beta recently, which pushed my glib version to 2.48.0-1ubuntu4. I already checked the changelog for relevant information on G_LOCK, but couldn't find anything. Did anybody else notice funny effects, when using G_LOCK macros with the recent glib version? Did I miss some changes here?
Firstly, all that G_LOCK_DEFINE does is create a GMutex variable whose name encodes the name of the variable that it's protecting, e.g. G_LOCK_DEFINE(mSomeCounter) becomes GMutex g__mSomeCounter_lock;. So we can expand your code to something like:
class CSomeClass {
private:
gulong mSomeCounter;
GMutex g__mSomeCounter_lock;
public:
CSomeClass ();
};
CSomeClass::CSomeClass()
{
g_mutex_lock(&g__mSomeCounter_lock);
mSomeCounter = 0;
g_mutex_unlock(&g__mSomeCounter_lock);
}
The fundamental problem here is that you're not initializing any of the members of the class CSomeClass. You're assigning values to some of them in the constructor body, but you're definitely not initializing them. There's a difference between assignment in the constructor body and using a member initializer, such as:
CSomeClass::CSomeClass() : mSomeCounter(0)
As a result, the mutex that's created for the variable may contain garbage. There's probably nothing in the glib code that changed to cause this; it's more likely that changes to other libraries have changed the memory layout of your app, uncovering the bug.
The glib documentation hints that you need to g_mutex_init a mutex
that has been allocated on the stack, or as part of a larger structure
but that you do not need to initialize one that is statically allocated:
It is not necessary to initialize a mutex that has been statically allocated
Class instances are almost never statically allocated.
You need to fix your constructor to ensure that it initializes the mutex 'properly' e.g.:
CSomeClass::CSomeClass()
{
g_mutex_init(&G_LOCK_NAME(mSomeCounter));
G_LOCK(mSomeCounter);
mSomeCounter = 0;
G_UNLOCK(mSomeCounter);
}
TBH, I'd put the mutex into a class holder, and initialize it as part of that, rather than the way you're doing it, to ensure that it gets initialized, locked and unlocked as part of the standard C++ RAII semantics.
If you use a small main stub, something like:
int main() {
{ CSomeClass class1; }
{ CSomeClass class2; }
{ CSomeClass class3; }
}
and your code, there's a good chance it will hang anyway. (My Mac crashed the example with: GLib (gthread-posix.c): Unexpected error from C library during 'pthread_mutex_lock': Invalid argument. Aborting.)
Some simple, non-production example wrappers to help with RAII:
class CGMutex {
GMutex mutex;
public:
CGMutex() {
g_mutex_init(&mutex);
}
~CGMutex() {
g_mutex_clear(&mutex);
}
GMutex *operator&() {
return &mutex;
}
};
class CGMutexLocker {
CGMutex &mRef;
public:
CGMutexLocker(CGMutex &mutex) : mRef(mutex) {
g_mutex_lock(&mRef);
}
~CGMutexLocker() {
g_mutex_unlock(&mRef);
}
};
class CSomeClass {
private:
gulong mSomeCounter;
CGMutex mSomeCounterLock;
public:
CSomeClass ();
};
CSomeClass::CSomeClass()
{
CGMutexLocker locker(mSomeCounterLock); // lock the mutex using the locker
mSomeCounter = 0;
}
Initializing mSomeCounter in the constructor ensures that the counter has a defined value; otherwise it would contain garbage.

Communication between 2 threads C++ UNIX

I need your help with wxWidgets. I have two threads (one wxTimer and one wxThread), and I need them to communicate. I have a class with accessor methods to read/write its variables (shared memory through this object).
My problem is: I instantiate this class with "new" in one thread, but I don't know whether that is also necessary in the second thread. If I instantiate it there too, the addresses of the variables are different, yet both threads need to see the same values :/
I know I need a wxSemaphore to prevent errors when both threads access the variables at the same time.
Thank you for your help!
EDIT: My code
I need to tie this to my code. Thanks for everything ;)
Here is the declaration of my wxTimer in my class EvtFramePrincipal (the GUI):
In .h
EvtFramePrincipal( wxWindow* parent );
#include <wx/timer.h>
wxTimer m_timer;
in .cpp -Constructor EvtFramePrincipal
EvtFramePrincipal::EvtFramePrincipal( wxWindow* parent )
:
FramePrincipal( parent ),m_timer(this)
{
Connect(wxID_ANY,wxEVT_TIMER,wxTimerEventHandler(EvtFramePrincipal::OnTimer),NULL,this);
m_timer.Start(250);
}
So I call OnTimer method every 250ms with this line.
For my second thread start from EvtFramePrincipal (IHM):
in .h EvtFramePrincipal
#include "../Client.h"
Client *ClientIdle;
in .cpp EvtFramePrincipal
ClientIdle= new Client();
ClientIdle->Run();
In .h Client (Thread)
class Client: public wxThread
public:
Client();
virtual void *Entry();
virtual void OnExit();
In .cpp Client (Thread)
Client::Client() : wxThread()
{
}
So far, no problem; the thread is OK?
Now I need this class to act as a messenger between my two threads.
#ifndef PARTAGE_H
#define PARTAGE_H
#include "wx/string.h"
#include <iostream>
using std::cout;
using std::endl;
class Partage
{
public:
Partage();
virtual ~Partage();
bool Return_Capteur_Aval()
{ return Etat_Capteur_Aval; }
bool Return_Capteur_Amont()
{ return Etat_Capteur_Amont; }
bool Return_Etat_Barriere()
{ return Etat_Barriere; }
bool Return_Ouverture()
{ return Demande_Ouverture; }
bool Return_Fermeture()
{ return Demande_Fermeture; }
bool Return_Appel()
{ return Appel_Gardien; }
void Set_Ouverture(bool Etat)
{ Demande_Ouverture=Etat; }
void Set_Fermeture(bool Etat)
{ Demande_Fermeture=Etat; }
void Set_Capteur_Aval(bool Etat)
{ Etat_Capteur_Aval=Etat; }
void Set_Capteur_Amont(bool Etat)
{ Etat_Capteur_Amont=Etat; }
void Set_Barriere(bool Etat)
{ Etat_Barriere=Etat; }
void Set_Appel(bool Etat)
{ Appel_Gardien=Etat; }
void Set_Code(wxString valeur_code)
{ Code=valeur_code; }
void Set_Badge(wxString numero_badge)
{ Badge=numero_badge; }
void Set_Message(wxString message)
{
Message_Affiche=wxT("");
Message_Affiche=message;
}
wxString Get_Message()
{
return Message_Affiche;
}
wxString Get_Code()
{ return Code; }
wxString Get_Badge()
{ return Badge; }
protected:
private:
bool Etat_Capteur_Aval;
bool Etat_Capteur_Amont;
bool Etat_Barriere;
bool Demande_Ouverture;
bool Demande_Fermeture;
bool Appel_Gardien;
wxString Code;
wxString Badge;
wxString Message_Affiche;
};
#endif // PARTAGE_H
So in my EvtFramePrincipal (wxTimer), I create this class with new. But what do I need to do in the other thread (wxThread) to communicate?
Sorry if this is difficult to understand :/
The main thread should create the shared variable first. After that, you can create both threads and pass each of them a pointer to the shared variable.
That way, both of them know how to interact with the shared variable. You need to implement a mutex or wxSemaphore in the methods of the shared variable.
You can use a singleton to get access to a central object.
Alternatively, create the central object before creating the threads and pass the reference to the central object to threads.
Use a mutex in the central object to prevent simultaneous access.
Creating one central object on each thread is not an option.
EDIT 1: Adding more details and examples
Let's start with some assumptions. The OP indicated that
I have 2 threads (1 wxTimer and 1 wxThread)
To tell the truth, I know very little of the wxWidgets framework, but there's always the documentation. So I can see that:
wxTimer provides a timer that will execute the wxTimer::Notify() method when the timer expires. The documentation doesn't say anything about thread execution (although there's a note "A timer can only be used from the main thread", which I'm not sure how to interpret). I would guess that the Notify method is executed in some event-loop or timer-loop thread.
wxThread provides a model for Thread execution, that runs the wxThread::Entry() method. Running a wxThread object will actually create a thread that runs the Entry method.
So your problem is that you need the same object to be accessible in both the wxTimer::Notify() and wxThread::Entry() methods.
This object:
It's not one variable but several of them stored in one class
e.g.
struct SharedData {
// NOTE: This is very simplistic.
// since the information here will be modified/read by
// multiple threads, it should be protected by one or more
// mutexes
// so probably a class with getter/setters will be better suited
// so that access with mutexes can be enforced within the class.
SharedData():var2(0) { }
std::string var1;
int var2;
};
of which you have somewhere an instance of that:
std::shared_ptr<SharedData> myData=std::make_shared<SharedData>();
or perhaps in pointer form or perhaps as a local variable or object attribute
Option 1: a shared reference
You're not really using wxTimer or wxThread, but classes that inherit from them (at least wxThread::Entry() is pure virtual). In the case of wxTimer you could change the owner to a different wxEvtHandler that will receive the event, but you still need to provide an implementation.
So you can have
class MyTimer: public wxTimer {
public:
void Notify() {
// Your code goes here
// but it can access data through the local reference
}
void setData(const std::shared_ptr<SharedData> &data) {
mLocalReference=data;
}
private:
std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyTimer timer;
timer.setData(myData);
timer.StartOnce(10000); // wake me up in 10 secs.
Similarly for the Thread
class MyThread: public wxThread {
public:
ExitCode Entry() {
// Your code goes here
// but it can access data through the local reference
return 0;
}
void setData(const std::shared_ptr<SharedData> &data) {
mLocalReference=data;
}
private:
std::shared_ptr<SharedData> mLocalReference;
};
That will need to be set:
MyThread *thread=new MyThread();
thread->setData(myData);
thread->Run(); // thread starts running.
Option 2: Using a singleton.
Sometimes you cannot modify MyThread or MyTimer... or it is too difficult to route the reference to myData to the thread or timer instances... or you're just too lazy or too busy to bother (beware of your technical debt!!!)
We can tweak the SharedData into:
struct SharedData {
std::string var1;
int var2;
static SharedData *instance() {
// NOTE that some mutexes are needed here
// to prevent the case where first initialization
// is executed simultaneously from different threads
// allocating two objects, one of them leaked.
if(!sInstance) {
sInstance=new SharedData();
}
return sInstance;
}
private:
SharedData():var2(0) { } // Note we've made the constructor private
static SharedData *sInstance; // define and zero-initialize in a .cpp file: SharedData* SharedData::sInstance = 0;
};
This object (because it only allows the creation of a single object) can be accessed from
either MyTimer::Notify() or MyThread::Entry() with
SharedData::instance()->var1;
Interlude: why Singletons are evil
(or why the easy solution might bite you in the future).
What is so bad about singletons?
Why Singletons are Evil
Singletons Are Evil
My main reasons are:
There's one and only one instance... and you might think that you only need one now, but who knows what the future will hold. You've taken an easy solution to a coding problem that has far-reaching architectural consequences and that might be difficult to revert.
It will not allow dependency injection (because the actual class name is used when accessing the object).
Still, I don't think it is something to avoid completely. It has its uses, it can solve your problem, and it might save your day.
Option 3. Some middle ground.
You could still organize your data around a central repository with methods to access different instances (or different implementations) of the data.
This central repository can be a singleton (if it really is central, common and unique), but it is not the shared data itself; it is what is used to retrieve the shared data, e.g. identified by some ID (an ID might be easier to share between the threads using option 1).
Something like:
CentralRepository::instance()->getDataById(sharedId)->var1;
EDIT 2: Comments after OP posted (more) code ;)
It seems that your object EvtFramePrincipal will execute the timer callback and also contain the ClientIdle pointer to a Client object (the thread)... I'd do:
Make the Client class contain a Partage attribute (a pointer or a smart pointer).
Make the EvtFramePrincipal contain a Partage attribute (a pointer or smart pointer). I guess this will have the lifecycle of the whole application, so the Partage object can share that lifecycle too.
Add mutex locking to all methods getting and setting data in the Partage attribute, since it can be accessed from multiple threads.
After the Client object is instantiated, set the reference to the Partage object that the EvtFramePrincipal contains.
Client can access Partage because we set its reference when it was created. When the Entry method runs in its thread, it will be able to access it.
EvtFramePrincipal can access the Partage (because it is one of its attributes), so the event handler for the timer event will be able to access it.

Best way to protect a callback function from deconstructed classes

What would be a good/best way to ensure thread safety for callback objects? Specifically, I'm trying to prevent a callback object from being destroyed before all the threads are finished with it.
It is easy to ensure thread safety in the client code, but I'm looking for a way that is a bit more streamlined. For example, using a factory object to generate the callback objects. The trouble then lies in tracking the usage of the callback object.
Below is an example code that I'm trying to improve.
class CHandlerCallback
{
public:
CHandlerCallback(){ ... };
virtual ~CHandlerCallback(){ ... };
virtual void OnBegin(UINT nTotal ){ ... };
virtual void OnStep (UINT nIncrmt){ ... };
virtual void OnEnd(UINT nErrCode){ ... };
protected:
...
}
static DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
CHandler* phandler = (CHandler*)lpParameter;
phandler->ThreadProc();
return 0;
};
class CHandler
{
public:
CHandler(CHandlerCallback * sink = NULL) {
m_pSink = sink;
// Start the server thread. (ThreadProc)
};
~CHandler(){...};
VOID ThreadProc() {
... do stuff
if (m_pSink) m_pSink->OnBegin(..)
while (not exit) {
... do stuff
if (m_pSink) m_pSink->OnStep(..)
... do stuff
}
if (m_pSink) m_pSink->OnEnd(..);
};
private:
CHandlerCallback * m_pSink;
}
class CSpecial1Callback: public CHandlerCallback
{
public:
CSpecial1Callback(){ ... };
virtual ~CSpecial1Callback(){ ... };
virtual void OnStep (UINT nIncrmt){ ... };
}
class CSpecial2Callback: public CHandlerCallback...
Then the code that runs everything in a way similar to the following:
int main {
CSpecial2Callback* pCallback = new CSpecial2Callback();
CHandler handler(pCallback );
// Right now the client waits for CHandler to finish before deleting
// pCallback
}
Thanks!
If you're using C++11 you can use smart pointers to keep the object around until the last reference to it disappears; see shared_ptr. If you're not on C++11, you can use boost's version. If you don't want to pull in that library either, you can resort to keeping an internal count of the threads using the object and destroying it when that count reaches 0. Note that tracking the counter yourself can be difficult, as you'll need atomic updates to the counter.
shared_ptr<CSpecial2Callback> pCallback(new CSpecial2Callback());
CHandler handler(pCallback); // You'll need to change this to take a shared_ptr
... //Rest of code -- when the last reference to
... //pCallback is used up it will be destroyed.

How to handle failure to release a resource which is contained in a smart pointer?

How should an error during resource deallocation be handled, when the
object representing the resource is contained in a shared pointer?
EDIT 1:
To put this question in more concrete terms: Many C-style interfaces
have a function to allocate a resource, and one to release
it. Examples are open(2) and close(2) for file descriptors on POSIX
systems, XOpenDisplay and XCloseDisplay for a connection to an X
server, or sqlite3_open and sqlite3_close for a connection to an
SQLite database.
I like to encapsulate such interfaces in a C++ class, using the Pimpl
idiom to hide the implementation details, and providing a factory
method returning a shared pointer to ensure that the resource is
deallocated when no references to it remain.
But, in all the examples given above and many others, the function
used to release the resource may report an error. If this function is
called by the destructor, I cannot throw an exception because
generally destructors must not throw.
If, on the other hand, I provide a public method to release the
resource, I now have a class with two possible states: One in which
the resource is valid, and one in which the resource has already been
released. Not only does this complicate the implementation of the
class, it also opens a potential for wrong usage. This is bad, because
an interface should aim to make usage errors impossible.
I would be grateful for any help with this problem.
The original statement of the question, and thoughts about a possible
solution follow below.
EDIT 2:
There is now a bounty on this question. A solution must meet these
requirements:
The resource is released if and only if no references to it remain.
References to the resource may be destroyed explicitly. An exception is thrown if an error occurred while releasing the resource.
It is not possible to use a resource which has already been released.
Reference counting and releasing of the resource are thread-safe.
A solution should meet these requirements:
It uses the shared pointer provided by boost, the C++ Technical Report 1 (TR1), and the upcoming C++ standard, C++0x.
It is generic. Resource classes only need to implement how the resource is released.
Thank you for your time and thoughts.
EDIT 3:
Thanks to everybody who answered my question.
Alsk's answer met everything asked for in the bounty, and
was accepted. In multithreaded code, this solution would require
a separate cleanup thread.
I have added another answer where any exceptions during
cleanup are thrown by the thread that actually used the resource,
without need for a separate cleanup thread. If you are still
interested in this problem (it bothered me a lot), please
comment.
Smart pointers are a useful tool to manage resources safely. Examples
of such resources are memory, disk files, database connections, or
network connections.
// open a connection to the local HTTP port
boost::shared_ptr<Socket> socket = Socket::connect("localhost:80");
In a typical scenario, the class encapsulating the resource should be
noncopyable and polymorphic. A good way to support this is to provide
a factory method returning a shared pointer, and declare all
constructors non-public. The shared pointers can now be copied from
and assigned to freely. The object is automatically destroyed when no
reference to it remains, and the destructor then releases the
resource.
/** A TCP/IP connection. */
class Socket
{
public:
static boost::shared_ptr<Socket> connect(const std::string& address);
virtual ~Socket();
protected:
Socket(const std::string& address);
private:
// not implemented
Socket(const Socket&);
Socket& operator=(const Socket&);
};
But there is a problem with this approach. The destructor must not
throw, so a failure to release the resource will remain undetected.
A common way out of this problem is to add a public method to release
the resource.
class Socket
{
public:
virtual void close(); // may throw
// ...
};
Unfortunately, this approach introduces another problem: Our objects
may now contain resources which have already been released. This
complicates the implementation of the resource class. Even worse, it
makes it possible for clients of the class to use it incorrectly. The
following example may seem far-fetched, but it is a common pitfall in
multi-threaded code.
socket->close();
// ...
size_t nread = socket->read(&buffer[0], buffer.size()); // wrong use!
Either we ensure that the resource is not released before the object
is destroyed, thereby losing any way to deal with a failed resource
deallocation. Or we provide a way to release the resource explicitly
during the object's lifetime, thereby making it possible to use the
resource class incorrectly.
There is a way out of this dilemma. But the solution involves using a
modified shared pointer class. These modifications are likely to be
controversial.
Typical shared pointer implementations, such as boost::shared_ptr,
require that no exception be thrown when their object's destructor is
called. Generally, no destructor should ever throw, so this is a
reasonable requirement. These implementations also allow a custom
deleter function to be specified, which is called in lieu of the
destructor when no reference to the object remains. The no-throw
requirement is extended to this custom deleter function.
The rationale for this requirement is clear: The shared pointer's
destructor must not throw. If the deleter function does not throw, nor
will the shared pointer's destructor. However, the same holds for
other member functions of the shared pointer which lead to resource
deallocation, e.g. reset(): If resource deallocation fails, no
exception can be thrown.
The solution proposed here is to allow custom deleter functions to
throw. This means that the modified shared pointer's destructor must
catch exceptions thrown by the deleter function. On the other hand,
member functions other than the destructor, e.g. reset(), shall not
catch exceptions of the deleter function (and their implementation
becomes somewhat more complicated).
Here is the original example, using a throwing deleter function:
/** A TCP/IP connection. */
class Socket
{
public:
static SharedPtr<Socket> connect(const std::string& address);
protected:
Socket(const std::string& address);
virtual ~Socket() { }
private:
struct Deleter;
// not implemented
Socket(const Socket&);
Socket& operator=(const Socket&);
};
struct Socket::Deleter
{
void operator()(Socket* socket)
{
// Close the connection. If an error occurs, delete the socket
// and throw an exception.
delete socket;
}
};
SharedPtr<Socket> Socket::connect(const std::string& address)
{
return SharedPtr<Socket>(new Socket(address), Deleter());
}
We can now use reset() to free the resource explicitly. If there is
still a reference to the resource in another thread or another part of
the program, calling reset() will only decrement the reference
count. If this is the last reference to the resource, the resource is
released. If resource deallocation fails, an exception is thrown.
SharedPtr<Socket> socket = Socket::connect("localhost:80");
// ...
socket.reset();
EDIT:
Here is a complete (but platform-dependent) implementation of the deleter:
struct Socket::Deleter
{
void operator()(Socket* socket)
{
if (close(socket->m_impl.fd) < 0)
{
int error = errno;
delete socket;
throw Exception::fromErrno(error);
}
delete socket;
}
};
We need to store allocated resources somewhere (as it was already mentioned by DeadMG) and explicitly call some reporting/throwing function outside of any destructor. But that doesn't prevent us from taking advantage of reference counting implemented in boost::shared_ptr.
/** A TCP/IP connection. */
class Socket
{
private:
//store internally every allocated resource here
static std::vector<boost::shared_ptr<Socket> > pool;
public:
static boost::shared_ptr<Socket> connect(const std::string& address)
{
//...
boost::shared_ptr<Socket> socket(new Socket(address));
pool.push_back(socket); //the socket won't be actually
//destroyed until we want it to
return socket;
}
virtual ~Socket();
//call cleanupAndReport() as often as needed
//probably, on a separate thread, or by timer
static void cleanupAndReport()
{
//find resources without clients
foreach(boost::shared_ptr<Socket>& socket, pool)
{
if(socket.unique()) //there are no clients for this socket, i.e.
//there are no shared_ptr's elsewhere pointing to this socket
{
//try to deallocate this resource
if (close(socket->m_impl.fd) < 0)
{
int error = errno;
socket.reset(); //destroys Socket object
//throw an exception or handle error in-place
//...
//throw Exception::fromErrno(error);
}
else
{
socket.reset();
}
}
} //foreach socket
}
protected:
Socket(const std::string& address);
private:
// not implemented
Socket(const Socket&);
Socket& operator=(const Socket&);
};
The implementation of cleanupAndReport() should be a little more complicated: in the present version the pool is left populated with null pointers after cleanup, and in the case of a thrown exception the function has to be called again until it no longer throws, etc. But I hope it illustrates the idea well.
Now, more general solution:
//forward declarations
template<class Resource>
boost::shared_ptr<Resource> make_shared_resource();
template<class Resource>
void cleanupAndReport(boost::function1<void, boost::shared_ptr<Resource> > deallocator);
//for every type of used resource there will be a template instance with a static pool
template<class Resource>
class pool_holder
{
private:
friend boost::shared_ptr<Resource> make_shared_resource<Resource>();
friend void cleanupAndReport(boost::function1<void, boost::shared_ptr<Resource> >);
static std::vector<boost::shared_ptr<Resource> > pool;
};
template<class Resource>
std::vector<boost::shared_ptr<Resource> > pool_holder<Resource>::pool;
template<class Resource>
boost::shared_ptr<Resource> make_shared_resource()
{
boost::shared_ptr<Resource> res(new Resource);
pool_holder<Resource>::pool.push_back(res);
return res;
}
template<class Resource>
void cleanupAndReport(boost::function1<void,boost::shared_ptr<Resource> > deallocator)
{
foreach(boost::shared_ptr<Resource>& res, pool_holder<Resource>::pool)
{
if(res.unique())
{
deallocator(res);
}
} //foreach
}
//usage
{
boost::shared_ptr<A> a = make_shared_resource<A>();
boost::shared_ptr<A> a2 = make_shared_resource<A>();
boost::shared_ptr<B> b = make_shared_resource<B>();
//...
}
cleanupAndReport<A>(deallocate_A);
cleanupAndReport<B>(deallocate_B);
If releasing some resource can actually fail, then a destructor is clearly the wrong abstraction to use. Destructors are meant to clean up without fail, regardless of the circumstances. A close() method (or whatever you want to name it) is probably the only way to go.
But think closely about it. If releasing a resource actually fails, what can you do? Is such an error recoverable? If it is, which part of your code should handle it? The way to recover is probably highly application-specific and tied to other parts of the application. It is highly unlikely that you actually want recovery to happen automatically, in an arbitrary place in the code that happened to release the resource and trigger the error. A shared pointer does not really model what you're trying to achieve, so you need to create your own abstraction that models the behavior you want. Abusing shared pointers to do something they're not supposed to do is not the right way.
Also, please read this.
EDIT:
If all you want to do is to inform the user what happened before crashing, then consider wrapping the Socket in another wrapper object that would call the deleter on its destruction, catch any exceptions thrown and handle them by showing the user a message box or whatever. Then put this wrapper object inside a boost::shared_ptr.
Quoting Herb Sutter, author of "Exceptional C++" (from here):
If a destructor throws an exception,
Bad Things can happen. Specifically,
consider code like the following:
// The problem
//
class X {
public:
~X() { throw 1; }
};
void f() {
X x;
throw 2;
} // calls X::~X (which throws), then calls terminate()
If a destructor throws an exception
while another exception is already
active (i.e., during stack unwinding),
the program is terminated. This is
usually not a good thing.
In other words, regardless of what you might consider elegant in this situation, you cannot blithely throw an exception in a destructor unless you can guarantee that it will not be thrown while another exception is already being handled.
Besides, what can you do if you can't successfully get rid of a resource? Exceptions should be thrown for things that can be handled higher up, not bugs. If you want to report odd behavior, log the release failure and simply go on. Or terminate.
As announced in the question, edit 3:
Here is another solution which, as far as I can judge, fulfills the
requirements in the question. It is similar to the solution described
in the original question, but uses boost::shared_ptr instead of a
custom smart pointer.
The central idea of this solution is to provide a release()
operation on shared_ptr. If we can make the shared_ptr give up its
ownership, we are free to call a cleanup function, delete the object,
and throw an exception in case an error occurred during cleanup.
Boost has a good
reason
to not provide a release() operation on shared_ptr:
shared_ptr cannot give away ownership unless it's unique() because the
other copy will still destroy the object.
Consider:
shared_ptr<int> a(new int);
shared_ptr<int> b(a); // a.use_count() == b.use_count() == 2
int * p = a.release();
// Who owns p now? b will still call delete on it in its destructor.
Furthermore, the pointer returned by release() would be difficult to
deallocate reliably, as the source shared_ptr could have been created
with a custom deleter.
The first argument against a release() operation is that, by the
nature of shared_ptr, many pointers share ownership of the object,
so no single one of them can simply release that ownership. But what
if the release() function returned a null pointer if there were
still other references left? The shared_ptr can reliably determine
this, without race conditions.
The second argument against the release() operation is that, if a
custom deleter was passed to the shared_ptr, you should use that to
deallocate the object, rather than simply deleting it. But release()
could return a function object, in addition to the raw pointer, to
enable its caller to deallocate the pointer reliably.
However, in our specific scenario, custom deleters will not be an
issue, because we do not have to deal with arbitrary custom
deleters. This will become clearer from the code given below.
Providing a release() operation on shared_ptr without modifying
its implementation is, of course, not possible without a hack. The
hack which is used in the code below relies on a thread-local variable
to prevent our custom deleter from actually deleting the object.
That said, here's the code, consisting mostly of the header
Resource.hpp, plus a small implementation file Resource.cpp. Note
that it must be linked with -lboost_thread-mt due to the
thread-local variable.
// ---------------------------------------------------------------------
// Resource.hpp
// ---------------------------------------------------------------------
#include <boost/assert.hpp>
#include <boost/ref.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/thread/tss.hpp>
/// Factory for a resource.
template<typename T>
struct ResourceFactory
{
/// Create a resource.
static boost::shared_ptr<T>
create()
{
return boost::shared_ptr<T>(new T, ResourceFactory());
}
template<typename A1>
static boost::shared_ptr<T>
create(const A1& a1)
{
return boost::shared_ptr<T>(new T(a1), ResourceFactory());
}
template<typename A1, typename A2>
static boost::shared_ptr<T>
create(const A1& a1, const A2& a2)
{
return boost::shared_ptr<T>(new T(a1, a2), ResourceFactory());
}
// ...
/// Destroy a resource.
static void destroy(boost::shared_ptr<T>& resource);
/// Deleter for boost::shared_ptr<T>.
void operator()(T* resource);
};
namespace impl
{
// ---------------------------------------------------------------------
/// Return the last reference to the resource, or zero. Resets the pointer.
template<typename T>
T* release(boost::shared_ptr<T>& resource);
/// Return true if the resource should be deleted (thread-local).
bool wantDelete();
// ---------------------------------------------------------------------
} // namespace impl
template<typename T>
inline
void ResourceFactory<T>::destroy(boost::shared_ptr<T>& ptr)
{
T* resource = impl::release(ptr);
if (resource != 0) // Is it the last reference?
{
try
{
resource->close();
}
catch (...)
{
delete resource;
throw;
}
delete resource;
}
}
// ---------------------------------------------------------------------
template<typename T>
inline
void ResourceFactory<T>::operator()(T* resource)
{
if (impl::wantDelete())
{
try
{
resource->close();
}
catch (...)
{
}
delete resource;
}
}
namespace impl
{
// ---------------------------------------------------------------------
/// Flag in thread-local storage.
class Flag
{
public:
~Flag()
{
m_ptr.release();
}
Flag& operator=(bool value)
{
if (value != static_cast<bool>(*this))
{
if (value)
{
m_ptr.reset(s_true); // may throw boost::thread_resource_error!
}
else
{
m_ptr.release();
}
}
return *this;
}
operator bool()
{
return m_ptr.get() == s_true;
}
private:
boost::thread_specific_ptr<char> m_ptr;
static char* s_true;
};
// ---------------------------------------------------------------------
/// Flag to prevent deletion.
extern Flag t_nodelete;
// ---------------------------------------------------------------------
/// Return the last reference to the resource, or zero.
template<typename T>
T* release(boost::shared_ptr<T>& resource)
{
try
{
BOOST_ASSERT(!t_nodelete);
t_nodelete = true; // may throw boost::thread_resource_error!
}
catch (...)
{
t_nodelete = false;
resource.reset();
throw;
}
T* rv = resource.get();
resource.reset();
return wantDelete() ? rv : 0;
}
// ---------------------------------------------------------------------
} // namespace impl
And the implementation file:
// ---------------------------------------------------------------------
// Resource.cpp
// ---------------------------------------------------------------------
#include "Resource.hpp"
namespace impl
{
// ---------------------------------------------------------------------
bool wantDelete()
{
bool rv = !t_nodelete;
t_nodelete = false;
return rv;
}
// ---------------------------------------------------------------------
Flag t_nodelete;
// ---------------------------------------------------------------------
char* Flag::s_true((char*)0x1);
// ---------------------------------------------------------------------
} // namespace impl
And here is an example of a resource class implemented using this solution:
// ---------------------------------------------------------------------
// example.cpp
// ---------------------------------------------------------------------
#include "Resource.hpp"
#include <cstdlib>
#include <string>
#include <stdexcept>
#include <iostream>
// uncomment to test failed resource allocation, usage, and deallocation
//#define TEST_CREAT_FAILURE
//#define TEST_USAGE_FAILURE
//#define TEST_CLOSE_FAILURE
// ---------------------------------------------------------------------
/// The low-level resource type.
struct foo { char c; };
// ---------------------------------------------------------------------
/// The low-level function to allocate the resource.
foo* foo_open()
{
#ifdef TEST_CREAT_FAILURE
return 0;
#else
return (foo*) std::malloc(sizeof(foo));
#endif
}
// ---------------------------------------------------------------------
/// Some low-level function using the resource.
int foo_use(foo*)
{
#ifdef TEST_USAGE_FAILURE
return -1;
#else
return 0;
#endif
}
// ---------------------------------------------------------------------
/// The low-level function to free the resource.
int foo_close(foo* foo)
{
std::free(foo);
#ifdef TEST_CLOSE_FAILURE
return -1;
#else
return 0;
#endif
}
// ---------------------------------------------------------------------
/// The C++ wrapper around the low-level resource.
class Foo
{
public:
void use()
{
if (foo_use(m_foo) < 0)
{
throw std::runtime_error("foo_use");
}
}
protected:
Foo()
: m_foo(foo_open())
{
if (m_foo == 0)
{
throw std::runtime_error("foo_open");
}
}
void close()
{
if (foo_close(m_foo) < 0)
{
throw std::runtime_error("foo_close");
}
}
private:
foo* m_foo;
friend struct ResourceFactory<Foo>;
};
// ---------------------------------------------------------------------
typedef ResourceFactory<Foo> FooFactory;
// ---------------------------------------------------------------------
/// Main function.
int main()
{
try
{
boost::shared_ptr<Foo> resource = FooFactory::create();
resource->use();
FooFactory::destroy(resource);
}
catch (const std::exception& e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
Finally, here is a small Makefile to build all that:
# Makefile
CXXFLAGS = -g -Wall
example: example.cpp Resource.hpp Resource.o
$(CXX) $(CXXFLAGS) -o example example.cpp Resource.o -lboost_thread-mt
Resource.o: Resource.cpp Resource.hpp
$(CXX) $(CXXFLAGS) -c Resource.cpp -o Resource.o
clean:
rm -f Resource.o example
Well, first off, I don't see a question here. Second off, I have to say that this is a bad idea. What will you gain in all this? When the last shared pointer to a resource is destroyed and your throwing deleter is called you will find yourself with a resource leak. You will have lost all handles to the resource that failed to release. You will never be able to try again.
Your desire to use an RAII object is a good one, but a smart pointer is simply insufficient for the task. What you need has to be even smarter: something that can rebuild itself when it fails to completely collapse. A destructor is insufficient for such an interface.
This does open the door to misuse, where someone could hold a handle to a resource that is invalid; the type of resource you're dealing with simply lends itself to this issue. There are many ways to approach it. One method is to use the handle/body idiom along with the state pattern: the implementation behind the interface is in one of two states, connected or unconnected. The handle simply passes requests to the internal body/state. Connected works as normal; unconnected throws exceptions/asserts on all applicable requests.
This thing would need a function other than the destructor to destroy a handle to it. You could provide a destroy() function that can throw. If you catch an error when you call it, you don't delete the handle but instead deal with the problem in whatever application-specific way you need. If you don't catch an error from destroy(), you let the handle go out of scope, reset it, or whatever. The destroy() function decrements the resource count and attempts to release the internal resource when that count reaches 0. Upon success the handle is switched to the unconnected state; upon failure it generates a catchable error that the client can attempt to handle, but leaves the handle in the connected state.
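A rough sketch of that destroy()-that-can-throw idea follows. All names are invented, and the handle/body split, reference count, and state pattern are condensed into a single connected flag just to show the contract: on failure the handle stays connected, and the destructor is only a non-throwing fallback.

```cpp
#include <stdexcept>

// Invented sketch: destroy() may throw, and on failure the handle
// stays connected so the caller can retry or handle the problem.
class ResourceHandle {
public:
    explicit ResourceHandle(bool fail_release = false)
        : m_connected(true), m_fail_release(fail_release) {}

    bool connected() const { return m_connected; }

    // Explicit, throwing teardown. On failure nothing is lost: the
    // handle remains connected and the caller decides what to do.
    void destroy() {
        if (!m_connected)
            return;
        if (!low_level_release())
            throw std::runtime_error("release failed; handle still valid");
        m_connected = false;
    }

    // The destructor is only a best-effort fallback and never throws.
    ~ResourceHandle() {
        if (m_connected)
            low_level_release();  // failure is ignored here (or logged)
    }

private:
    // Stand-in for the real C-style release call; the flag simulates
    // a release failure for demonstration purposes.
    bool low_level_release() { return !m_fail_release; }

    bool m_connected;
    bool m_fail_release;
};
```

The key design choice is that the throwing path is an ordinary member function, so the caller is in a normal context and can catch, retry, or escalate, which a destructor can never offer.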
It's not an entirely trivial thing to write but what you are wanting to do, introduce exceptions into destruction, simply will not work.
Generally speaking, if a resource's C-style close function fails, that's a problem with the API rather than with your code. However, what I would be tempted to do is this: if destruction fails, add the resource to a list of resources whose destruction/cleanup needs to be re-attempted later (say, when the app exits, periodically, or when other similar resources are destroyed), and then try to destroy them again. If any are still left over at that point, report an error to the user and exit.
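That retry list might be sketched as follows. The PendingCleanup type and the flaky demo close function are invented; real code would also record what each resource is and decide when retries happen.

```cpp
#include <cstddef>
#include <vector>

// A close callback in the C style: returns true on success.
typedef bool (*CloseFn)(void*);

// Invented sketch of the "retry later" list: resources whose close
// failed are parked here and retried at a convenient time (app exit,
// a timer, or when similar resources are destroyed).
class PendingCleanup {
public:
    void add(void* resource, CloseFn close) {
        Entry e = { resource, close };
        m_pending.push_back(e);
    }

    // Retry every parked resource and keep the ones that still fail.
    // Returns how many resources remain pending afterwards.
    std::size_t retry() {
        std::vector<Entry> still_failing;
        for (std::size_t i = 0; i < m_pending.size(); ++i) {
            if (!m_pending[i].close(m_pending[i].resource))
                still_failing.push_back(m_pending[i]);
        }
        m_pending.swap(still_failing);
        return m_pending.size();
    }

private:
    struct Entry { void* resource; CloseFn close; };
    std::vector<Entry> m_pending;
};

// Demo close function: fails on the first attempt, succeeds afterwards.
static int g_attempts = 0;
static bool flaky_close(void*) { return ++g_attempts >= 2; }
```

At exit, a nonzero return from the final retry() is the point at which you would report the error to the user.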

Destruction of singleton in DLL

I'm trying to create a simple Win32 DLL. As the interface between the DLL and the EXE I use C functions, but inside the DLL I use a C++ singleton object. The following is an example of my DLL implementation:
// MyDLLInterface.cpp file --------------------
#include "stdafx.h"
#include <memory>
#include "MyDLLInterface.h"
class MySingleton
{
friend class std::auto_ptr< MySingleton >;
static std::auto_ptr< MySingleton > m_pInstance;
MySingleton()
{
m_pName = new char[32];
strcpy(m_pName, "MySingleton");
}
virtual ~MySingleton()
{
delete [] m_pName;
}
MySingleton(const MySingleton&);
MySingleton& operator=(const MySingleton&);
public:
static MySingleton* Instance()
{
if (!m_pInstance.get())
m_pInstance.reset(new MySingleton);
return m_pInstance.get();
}
static void Delete()
{
m_pInstance.reset(0);
}
void Function() {}
private:
char* m_pName;
};
std::auto_ptr<MySingleton> MySingleton::m_pInstance(0);
void MyInterfaceFunction()
{
MySingleton::Instance()->Function();
}
void MyInterfaceUninitialize()
{
MySingleton::Delete();
}
// MyDLLInterface.h file --------------------
#if defined(MY_DLL)
#define MY_DLL_EXPORT __declspec(dllexport)
#else
#define MY_DLL_EXPORT __declspec(dllimport)
#endif
MY_DLL_EXPORT void MyInterfaceFunction();
MY_DLL_EXPORT void MyInterfaceUninitialize();
The problem, or rather the question, I have is the following: if I don't call MyInterfaceUninitialize() from my EXE's ExitInstance(), I have a memory leak (the m_pName pointer). Why is this happening? It looks like the destruction of MySingleton happens after the EXE exits. Is it possible to force the DLL or EXE to destroy the MySingleton a little earlier, so that I don't need to call the MyInterfaceUninitialize() function?
EDIT:
Thanks for all your help and explanations. Now I understand that this is a design issue. If I want to stay with my current solution, I need to call MyInterfaceUninitialize() in my EXE. If I don't, that's also OK, because the singleton destroys itself when it leaves the EXE's scope (but I have to live with disturbing debugger messages). The only way to avoid this behavior is to rethink the whole implementation.
I can also add my DLL under "Delay Loaded DLLs" in Linker->Input in Visual Studio to get rid of the disturbing debugger messages.
If I don't call MyInterfaceUninitialize() from my EXE's ExitInstance(), I have a memory leak (the m_pName pointer). Why is this happening?
This is not a leak; this is the way auto_ptrs are supposed to work. They delete the owned instance when they go out of scope (which in your case is when the DLL is unloaded).
It looks like the destruction of MySingleton happens after the EXE exits.
Yes.
Is it possible to force the DLL or EXE to destroy the MySingleton a little earlier, so that I don't need to call the MyInterfaceUninitialize() function?
Not without calling this function.
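The scope-exit behavior described above can be seen in miniature without any DLL. Here is a small sketch using std::unique_ptr (std::auto_ptr is deprecated, but its ownership semantics at scope exit are the same); the Tracked type is invented so that destruction is observable.

```cpp
#include <memory>

// Tracked counts its live instances so we can observe destruction.
struct Tracked {
    static int live;
    Tracked()  { ++live; }
    ~Tracked() { --live; }
};
int Tracked::live = 0;

// While the smart pointer is in scope, the object is alive.
static int live_inside_scope() {
    std::unique_ptr<Tracked> p(new Tracked);
    return Tracked::live;   // still 1 here
}

// As soon as the pointer leaves scope, the object is deleted. A static
// smart pointer behaves the same way, except that its "scope" ends at
// static-destruction time, after the EXE's exit path has already run,
// which is why the leak detector still sees the allocation at exit.
static int live_after_scope() {
    {
        std::unique_ptr<Tracked> p(new Tracked);
    }                       // p destroyed here; Tracked deleted
    return Tracked::live;   // back to 0
}
```

So the memory is always reclaimed; the only question is whether that happens before or after the point at which the debugger takes its leak snapshot.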
You can take advantage of the DllMain callback function to take appropriate action when the DLL is loaded/unloaded or a process/thread attaches/detaches. You could then allocate objects per attached process/thread instead of using a singleton since this callback function is executed in the context of the attached thread. With that in mind, also take a look at Thread Local Storage (TLS).
Honestly, for the example you gave, it doesn't really matter whether you call the Uninitialize method from your ExitInstance. Yes, the debugger will complain about the unreleased memory, but then again, it's a singleton; it's intended to live for an extended duration.
Only if you have some state in the DLL that needs to be persisted at exit, or if you are dynamically loading and unloading DLLs multiple times, do you need to be diligent about cleaning up. Otherwise, letting the OS tear down the process at exit is just fine; the reported memory leak is inconsequential at that point.