C/C++ memory in shared libraries

I am uncertain how static global memory is managed in DLLs and shared objects, and I do not know whether each platform handles it the same way or differently.
Suppose you have a library of classes, one of which is a mutex class, and other classes in the library will use that mutex. What is the best or safest way to allocate a mutex in the library? I can see a couple of options:
1. Make the mutex private in the class. I can't see this working because the mutex would only be valid within the lifetime of the object. Maybe making the object a singleton and initializing it when the library is loaded (with DllMain's DLL_PROCESS_ATTACH or __attribute__((constructor))) would work; I am not sure.
2. Allocate the mutex outside of the class, in the static global space of the library. I think this would be the best option, but what exactly happens when the DLL is loaded? If I make an object static and global in a library, when does it get allocated, and where in the program does it get allocated? What happens if the library is loaded at runtime as opposed to when the program starts?
Any information about this is greatly appreciated!

The way memory is managed in shared images depends on the specific platform, and DLLs are specific to Microsoft Windows.
Generally, you should avoid global/shared static variables, as they may introduce serious problems or bugs that are hard to identify or resolve. Even singleton classes may cause several issues in C++, especially in libraries or multi-threaded applications. (And generally, using singletons is not considered good practice even in higher-level languages.)
For guarding against race conditions, the best option is a scoped lock class implemented with the RAII technique, alongside the shared_ptr smart pointer, which automates memory allocation and deallocation.
The code below illustrates a Mutex class implemented with the Windows API and the above techniques (as well as the Pimpl idiom):
// Mutex.h
#pragma once
#include <memory>

class Mutex
{
public:
    typedef unsigned long milliseconds;

    Mutex();
    ~Mutex();

    void Lock();
    void Unlock();
    bool TryLock();
    bool TimedLock(milliseconds ms);

private:
    struct private_data;
    std::shared_ptr<private_data> data;
    // The actual data is held in the private_data struct, which is not accessible
    // from outside. We only hold a "copyable handle" to it through the shared_ptr,
    // so copying a Mutex (by assignment, for example) shares the underlying data
    // instead of duplicating it. private_data's destructor is called automatically
    // only when the last Mutex object referring to it leaves its scope.
};
// Mutex.cpp
#include "Mutex.h"
#include <windows.h>
#include <cassert>

struct Mutex::private_data
{
    HANDLE hMutex;

    private_data()
    {
        hMutex = CreateMutex(NULL, FALSE, NULL);
    }
    ~private_data()
    {
        // Unlock(); ?? :/
        CloseHandle(hMutex);
    }
};

Mutex::Mutex()
    : data(new private_data())
{ }

Mutex::~Mutex()
{ }

void Mutex::Lock()
{
    DWORD ret = WaitForSingleObject(data->hMutex, INFINITE);
    assert(ret == WAIT_OBJECT_0);
}

void Mutex::Unlock()
{
    ReleaseMutex(data->hMutex);
}

bool Mutex::TryLock()
{
    DWORD ret = WaitForSingleObject(data->hMutex, 0);
    assert(ret != WAIT_ABANDONED);
    assert(ret != WAIT_FAILED);
    return ret != WAIT_TIMEOUT;
}

bool Mutex::TimedLock(milliseconds ms)
{
    DWORD ret = WaitForSingleObject(data->hMutex, static_cast<DWORD>(ms));
    assert(ret != WAIT_ABANDONED);
    assert(ret != WAIT_FAILED);
    return ret != WAIT_TIMEOUT;
}
// ScopedLock.h
#pragma once
#include "Mutex.h"

class ScopedLock
{
private:
    Mutex& m_mutex;

    ScopedLock(ScopedLock const&);            // disable copy constructor
    ScopedLock& operator=(ScopedLock const&); // disable assignment operator

public:
    explicit ScopedLock(Mutex& mutex)
        : m_mutex(mutex)
    { m_mutex.Lock(); }

    ~ScopedLock()
    { m_mutex.Unlock(); }
};
Sample usage:
Mutex m1;
MyClass1 o1;
MyClass2 o2;
...

{
    ScopedLock lock(m1);
    // thread-safe operations
    o1.Decrease();
    o2.Increase();
} // lock is released automatically here upon leaving scope

// non-thread-safe operations
o1.Decrease();
o2.Increase();
While the above code gives you the basic idea, an even better option is to use a high-quality C++ library such as Boost, which already provides mutex, scoped_lock and many other synchronization classes. (And fortunately C++11 provides full coverage of synchronization classes, freeing you from having to use the Boost libraries for this.)
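For reference, here is a minimal sketch of the C++11 equivalent using only the standard library (the counter and function names are purely illustrative):

#include <mutex>

std::mutex g_mutex;   // protects g_counter
int g_counter = 0;

void Increase()
{
    std::lock_guard<std::mutex> lock(g_mutex); // locks now, unlocks when 'lock' leaves scope
    ++g_counter;
}

std::lock_guard plays the same role as the ScopedLock class above, while std::unique_lock and std::timed_mutex cover the TryLock/TimedLock cases.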
UPDATE:
I also suggest searching for topics about automatic memory management in C++ (smart pointers) as well as the RAII technique.

Related

In C++, is there any cross-thread synchronisation method that causes major overhead only on writes?

I want to build a scheduling system that provides a lookup of thread-local resources.
The lookup is read very often but is updated only when a new thread is added.
Using an atomic/mutex causes quite an overhead, equally for both reads and writes.
Is there any way to shift the overhead to writes and make reads fast?
For example, a lookup of asio::io_context:
class Foo {
public:
    static Foo* instance;
    std::unordered_map<int /*custom thread id*/, asio::io_context*> io_contexts;
};

enum class threads : int {
    network, file_io, others_that_may_spawn_after_init_phase...
};

void some_func_called_in_random_thread() {
    // need an unwanted lock here
    asio::post(*Foo::instance->io_contexts[(int)threads::network], xxx);
    asio::post(*Foo::instance->io_contexts[(int)threads::file_io], yyy);
}

void init_network() {
    auto ptr = new asio::io_context();
    // need a lock here
    Foo::instance->io_contexts[(int)threads::network] = ptr;
}
...

Synchronizing method calls on shared object from multiple threads

I am thinking about how to implement a class containing private data that will eventually be modified by multiple threads through method calls. For synchronization (using the Windows API), I am planning on using a CRITICAL_SECTION object since all the threads will be spawned from the same process.
Given the following design, I have a few questions.
template <typename T>
class Shareable
{
private:
    const LPCRITICAL_SECTION sync; // can be read and used by multiple threads
    T *data;

public:
    Shareable(LPCRITICAL_SECTION cs, unsigned elems) : sync{cs}, data{new T[elems]} { }
    ~Shareable() { delete[] data; }

    void sharedModify(unsigned index, T &datum) // <-- Can this be validly called by multiple
                                                //     threads with synchronization being implicit?
    {
        EnterCriticalSection(sync);
        /*
            The critical section of code involving reads & writes to 'data'
        */
        LeaveCriticalSection(sync);
    }
};

// Somewhere else ...
DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
    Shareable<ActualType> *ptr = static_cast<Shareable<ActualType>*>(lpParameter);
    ActualType copyable = /* initialization */;
    ptr->sharedModify(validIndex, copyable); // <-- OK, synchronized?
    return 0;
}
The way I see it, the API calls will be conducted in the context of the current thread. That is, I assume this is the same as if I had acquired the critical section object from the pointer and called the API from within ThreadProc(). However, I am worried that if the object is created and placed in the main/initial thread, there will be something funky about the API calls.
1. When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
2. Should I instead get a pointer to the critical section object and use that instead?
3. Is there some other synchronization mechanism that is better suited to this scenario?
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
It's not implicit, it's explicit. There's only one CRITICAL_SECTION and only one thread can hold it at a time.
Should I instead get a pointer to the critical section object and use that instead?
No. There's no reason to use a pointer here.
Is there some other synchronization mechanism that is better suited to this scenario?
It's hard to say without seeing more code, but this is definitely the "default" solution. It's like a singly-linked list -- you learn it first, it always works, but it's not always the best choice.
When sharedModify() is called on the same object concurrently, from multiple threads, will the synchronization be implicit, in the way I described it above?
Implicit from the caller's perspective, yes.
Should I instead get a pointer to the critical section object and use that instead?
No. In fact, I would suggest giving the Shareable object ownership of its own critical section instead of accepting one from the outside (and embracing RAII concepts to write safer code), e.g.:
#include <windows.h>
#include <vector>

template <typename T>
class Shareable
{
private:
    CRITICAL_SECTION sync;
    std::vector<T> data;

    struct SyncLocker
    {
        CRITICAL_SECTION &sync;
        SyncLocker(CRITICAL_SECTION &cs) : sync(cs) { EnterCriticalSection(&sync); }
        ~SyncLocker() { LeaveCriticalSection(&sync); }
    };

public:
    Shareable(unsigned elems) : data(elems)
    {
        InitializeCriticalSection(&sync);
    }

    Shareable(const Shareable&) = delete;
    Shareable(Shareable&&) = delete;

    ~Shareable()
    {
        {
            SyncLocker lock(sync);
            data.clear();
        }
        DeleteCriticalSection(&sync);
    }

    void sharedModify(unsigned index, const T &datum)
    {
        SyncLocker lock(sync);
        data[index] = datum;
    }

    Shareable& operator=(const Shareable&) = delete;
    Shareable& operator=(Shareable&&) = delete;
};
Is there some other synchronization mechanism that is better suited to this scenario?
That depends. Will multiple threads be accessing the same index at the same time? If not, then there is not really a need for the critical section at all. One thread can safely access one index while another thread accesses a different index.
If multiple threads need to access the same index at the same time, a critical section might still not be the best choice. Locking the entire array might be a big bottleneck if you only need to lock portions of the array at a time. Things like the Interlocked API, or Slim Read/Write locks, might make more sense. It really depends on your thread designs and what you are actually trying to protect.
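For illustration, here is a minimal sketch of the Slim Reader/Writer lock idea mentioned above (the Win32 SRW functions are real; the small wrapper class around them is only an assumed example of how they might be applied here):

#include <windows.h>
#include <vector>

template <typename T>
class SharedArray
{
private:
    SRWLOCK sync;
    std::vector<T> data;

public:
    explicit SharedArray(unsigned elems) : data(elems)
    {
        InitializeSRWLock(&sync);   // SRW locks need no explicit destruction
    }

    T read(unsigned index)          // many readers may hold the lock at once
    {
        AcquireSRWLockShared(&sync);
        T copy = data[index];
        ReleaseSRWLockShared(&sync);
        return copy;
    }

    void write(unsigned index, const T &datum)  // writers get exclusive access
    {
        AcquireSRWLockExclusive(&sync);
        data[index] = datum;
        ReleaseSRWLockExclusive(&sync);
    }
};

Whether this beats a critical section still depends on the read/write ratio and on how coarse the locking needs to be.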

C++/MFC/ATL Thread-Safe String read/write

I have an MFC class that launches threads, and the threads need to modify CString members of the main class.
I hate mutex locks, so there must be an easier way to do this.
I am thinking of using the boost.org library, atl::atomic, or shared_ptr variables.
What is the best method of reading and writing the string in a thread-safe way?
class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    CString m_strInfo;
};

MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    CString strTmp = m_strInfo; // this may cause a crash
}

UINT MyClass::MyThread(LPVOID pArg)
{
    MyClass* pClass = (MyClass*)pArg;
    pClass->m_strInfo = _T("New Value"); // non-thread-safe change
    return 0;
}
According to MSDN, shared_ptr handles its reference counting automatically: https://msdn.microsoft.com/en-us/library/bb982026.aspx
So is this a better method?
#include <memory>

class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    std::shared_ptr<CString> m_strInfo; // ********
};

MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    CString strTmp = *m_strInfo; // this may cause a crash
}

UINT MyClass::MyThread(LPVOID pArg)
{
    MyClass* pClass = (MyClass*)pArg;
    std::shared_ptr<CString> newValue(new CString());
    newValue->SetString(_T("New Value"));
    pClass->m_strInfo = newValue; // thread-safe change?
    return 0;
}
You could implement some kind of lockless way to achieve that, but it depends on how you use MyClass and your thread. If your thread is processing some data and, after processing it, needs to update MyClass, then consider putting your string data in some other class, e.g.:
struct StringData {
    CString m_strInfo;
};
then inside your MyClass:
class MyClass
{
public:
    MyClass();
    static UINT MyThread(LPVOID pArg);
    StringData* m_pstrData;
    StringData* m_pstrDataForThreads;
};
Now, the idea is that in your main-thread code you use m_pstrData, but you need an atomic read to take a local pointer to it, i.e.:
MyClass::MyClass()
{
    AfxBeginThread(MyThread, this);
    StringData* pstrDataTemp = ATOMIC_READ(m_pstrData);
    if (pstrDataTemp)
        CString strTmp = pstrDataTemp->m_strInfo; // this may NOT cause a crash
}
Once your thread has finished processing data and wants to update the string, you atomically assign m_pstrDataForThreads to m_pstrData and allocate a new m_pstrDataForThreads.
The problem is how to safely delete the old m_pstrData; I suppose you could use std::shared_ptr here.
In the end it looks kind of complicated and IMO not really worth the effort; at the very least it is hard to tell whether this is really thread safe, and whether it will stay thread safe as the code gets more complicated. Also, this is for the single-worker-thread case, and you say you have multiple threads. That's why a critical section is the starting point; only if it turns out to be too slow should you think about a lockless approach.
Btw, depending on how often your string data is updated, you could also think about using PostMessage to safely pass a pointer to the new string to your main thread, as sketched below.
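A rough sketch of that PostMessage idea (the custom message id WM_APP_NEW_STRING, the CMyWnd class and its message-map wiring are assumptions for illustration; ownership of the heap-allocated copy passes to the main thread):

// Assumed custom message id, defined in a shared header.
#define WM_APP_NEW_STRING (WM_APP + 1)

// Worker thread: hand a heap-allocated copy over to the UI thread.
void PostNewString(HWND hMainWnd, const CString& value)
{
    CString* pCopy = new CString(value);
    if (!::PostMessage(hMainWnd, WM_APP_NEW_STRING, 0, reinterpret_cast<LPARAM>(pCopy)))
        delete pCopy; // post failed, avoid leaking the copy
}

// Main thread: handler (wired up with ON_MESSAGE) takes ownership and frees the copy.
LRESULT CMyWnd::OnNewString(WPARAM /*wParam*/, LPARAM lParam)
{
    CString* pCopy = reinterpret_cast<CString*>(lParam);
    m_strInfo = *pCopy; // runs only on the main thread, so no lock is needed
    delete pCopy;
    return 0;
}

Since the string is only ever touched on the main thread, no mutex is required at all; the cost is one heap allocation per update.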
[edit]
ATOMIC_READ does not exist; it is just a placeholder to convey the idea. Use e.g. C++11 atomics, as in the example below:

#include <atomic>
#include <cstdint>
#include <iostream>
...
std::atomic<uint64_t> sharedValue(0);
sharedValue.store(123, std::memory_order_relaxed);          // atomically store
uint64_t ret = sharedValue.load(std::memory_order_relaxed); // atomically read
std::cout << ret;
I would have used a simpler approach: protect the variable behind a SetStrInfo method (assuming m_cs is a CCriticalSection member of the class):

void SetStrInfo(const CString& str)
{
    CSingleLock lock(&m_cs, TRUE); // m_cs: assumed CCriticalSection member; lock acquired here
    m_strInfo = str;
}                                  // lock released automatically when 'lock' goes out of scope
For locking and unlocking we may use a CCriticalSection member of the class, wrapped in CSingleLock for RAII (as above). We may also use slim reader/writer locks for performance reasons (wrap them in a simple RAII class), or the newer C++ standard facilities for RAII locking/unlocking.
Call me old-school, but for me the std namespace offers a complicated set of options - it doesn't suit everything, or everyone.

Thread safe container

Here is an exemplary container class in pseudocode:
class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data new_item)
    {
        // addition of data
    }
    data get(size_t which)
    {
        // returning some data
    }
    void remove(size_t which)
    {
        // delete specified object
    }

private:
    data d;
};
How can this container be made thread safe? I have heard about mutexes - where should these mutexes be placed? Should the mutex be static for the class, or maybe in global scope? What is a good library for this task in C++?
First of all, the mutex should not be static for the class if you are going to use more than one instance. There are many cases where you should or shouldn't use them, so without seeing your code it's hard to say. Just remember that they are used to synchronise access to shared data, so it's wise to use them inside methods that modify or rely on the object's state. In your case I would use one mutex to protect the whole object and lock it in all three methods, like:
#include <mutex>

class Container
{
public:
    Container() {}
    ~Container() {}

    void add(data new_item)
    {
        std::lock_guard<std::mutex> lock(mutex);
        // addition of data
    }
    data get(size_t which)
    {
        std::lock_guard<std::mutex> lock(mutex);
        // getting a copy of the value
        // return that value
    }
    void remove(size_t which)
    {
        std::lock_guard<std::mutex> lock(mutex);
        // delete specified object
    }

private:
    data d;
    std::mutex mutex;
};
Intel Threading Building Blocks (TBB) provides a bunch of thread-safe container implementations for C++. It has been open sourced; you can download it from http://threadingbuildingblocks.org/ver.php?fid=174 .
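As a small illustration, here is a usage sketch of one of those containers, based on the classic tbb::concurrent_hash_map accessor API (the exact headers and names may differ between TBB versions, so check the documentation shipped with yours):

#include <tbb/concurrent_hash_map.h>
#include <string>

typedef tbb::concurrent_hash_map<int, std::string> Table;

void insert_or_update(Table& table, int key, const std::string& value)
{
    Table::accessor a;      // holds a write lock on the element while 'a' is alive
    table.insert(a, key);   // inserts a default-constructed value if the key is absent
    a->second = value;
}

bool lookup(const Table& table, int key, std::string& out)
{
    Table::const_accessor a; // holds a read lock on the element
    if (!table.find(a, key))
        return false;
    out = a->second;
    return true;
}

The container locks per element through the accessors, so you get fine-grained synchronization without writing any mutex code yourself.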
First: sharing mutable state between threads is hard. You should be using a library that has been audited and debugged.
With that said, there are two different functional issues:
you want a container to provide safe atomic operations
you want a container to provide safe multiple operations
The idea of multiple operations is that multiple accesses to the same container must be executed successively, under the control of a single entity. They require the caller to "hold" the mutex for the duration of the transaction so that only it changes the state.
1. Atomic operations
This one appears simple:
add a mutex to the object
at the start of each method, grab the mutex with an RAII lock
Unfortunately it's also plain wrong.
The issue is re-entrancy. It is likely that some methods will call other methods on the same object. If those once again attempt to grab the mutex, you get a deadlock.
It is possible to use re-entrant mutexes. They are a bit slower, but allow the same thread to lock a given mutex as many times as it wants. The number of unlocks should match the number of locks, so once again, RAII.
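A minimal sketch of the re-entrant variant, assuming C++11's std::recursive_mutex (the Container methods shown are only illustrative):

#include <mutex>

class Container
{
private:
    std::recursive_mutex mutex;   // the same thread may lock it several times

public:
    void add()
    {
        std::lock_guard<std::recursive_mutex> lock(mutex);
        // ... modify state ...
        touch();   // re-locks on the same thread: no deadlock
    }

    void touch()
    {
        std::lock_guard<std::recursive_mutex> lock(mutex);
        // ... more work ...
    }
};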
Another approach is to use dispatching methods:
#include <mutex>

class C {
public:
    void foo() { std::lock_guard<std::mutex> lock(_mutex); foo_impl(); }
private:
    void foo_impl() { /* do something */ bar_impl(); }  // work-methods may call each other...
    void bar_impl() { /* do something else */ }         // ...but never take the mutex themselves
    std::mutex _mutex;
};
The public methods are simple forwarders to private work-methods, and they simply lock. Then one just has to ensure that the private methods never take the mutex...
Of course there are risks of accidentally calling a locking method from a work-method, in which case you deadlock. Read on to avoid this ;)
2. Multiple operations
The only way to achieve this is to have the caller hold the mutex.
The general method is simple:
add a mutex to the container
provide a handle on this mutex
cross your fingers that the caller will never forget to hold the mutex while accessing the class
I personally prefer a much saner approach.
First, I create a "bundle of data", which simply represents the class data (plus a mutex), and then I provide a Proxy in charge of grabbing the mutex. The data is locked down so that only the proxy may access its state.
#include <mutex>

class ContainerData {
protected:
    friend class ContainerProxy;

    std::mutex _mutex;

    void foo();
    void bar();

private:
    // some data
};

class ContainerProxy {
public:
    ContainerProxy(ContainerData& data) : _data(data), _lock(data._mutex) {}

    void foo() { _data.foo(); }
    void bar() { foo(); _data.bar(); }

private:
    ContainerData& _data;
    std::unique_lock<std::mutex> _lock; // held for the proxy's whole lifetime
};
Note that it is perfectly safe for the Proxy to call its own methods. The mutex will be released automatically by the destructor.
The mutex can still be reentrant if multiple Proxies are desired. But really, when multiple proxies are involved, it generally turns into a mess. In debug mode, it's also possible to add a "check" that the mutex is not already held by this thread (and assert if it is).
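A possible sketch of that debug check, assuming C++11 (the DebugMutex name and layout are made up for illustration):

#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>

class DebugMutex
{
private:
    std::mutex mutex;
    std::atomic<std::thread::id> owner{std::thread::id()}; // id of the thread holding the lock

public:
    void lock()
    {
        // Fires if the calling thread already holds the mutex (would self-deadlock).
        assert(owner.load() != std::this_thread::get_id());
        mutex.lock();
        owner.store(std::this_thread::get_id());
    }

    void unlock()
    {
        owner.store(std::thread::id()); // back to "no owner"
        mutex.unlock();
    }
};

Because lock()/unlock() match the BasicLockable interface, the class can be dropped into std::lock_guard while debugging.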
3. Reminder
Using locks is error-prone. Deadlocks are a common cause of error and occur as soon as you have two mutexes (or one and re-entrancy). When possible, prefer using higher level alternatives.
Add the mutex as an instance variable of the class. Initialize it in the constructor, lock it at the very beginning of every method (including the destructor), and unlock it at the end of the method. Adding a global mutex for all instances of the class (a static member, or one in global scope) may be a performance penalty.
There is also a very nice collection of lock-free containers (including maps) by Max Khiszinsky:
LibCDS1 - Concurrent Data Structures
Here is the documentation page:
http://libcds.sourceforge.net/doc/index.html
It can be kind of intimidating to get started, because it is fully generic and requires you to register a chosen garbage collection strategy and initialize it. Of course, the threading library is configurable and you need to initialize that as well :)
See the following links for some getting started info:
initialization of CDS and the threading manager
http://sourceforge.net/projects/libcds/forums/forum/1034512/topic/4600301/
the unit tests (cd build && ./build.sh --debug-test for a debug build)
Here is a base template for main():
#include <cds/threading/model.h> // threading manager
#include <cds/gc/hzp/hzp.h>      // Hazard Pointer GC

int main()
{
    // Initialize the CDS library
    cds::Initialize();

    // Initialize the garbage collector(s) that you use
    cds::gc::hzp::GarbageCollector::Construct();

    // Attach the main thread
    // Note: this is needed if the main thread accesses libcds containers
    cds::threading::Manager::attachThread();

    // Do some useful work
    ...

    // Finish the main thread - detaches internal control structures
    cds::threading::Manager::detachThread();

    // Terminate GCs
    cds::gc::hzp::GarbageCollector::Destruct();

    // Terminate the CDS library
    cds::Terminate();
}
Don't forget to attach any additional threads you are using:
#include <cds/threading/model.h>

int myThreadFunc(void *)
{
    // Initialize libcds thread control structures
    cds::threading::Manager::attachThread();

    // Now you can work with GCs and libcds containers
    ....

    // Finish the working thread
    cds::threading::Manager::detachThread();
    return 0;
}
1 (not to be confused with Google's compact data structures library)

Detecting when an object is passed to a new thread in C++?

I have an object for which I'd like to track the number of threads that reference it. In general, when any method on the object is called I can check a thread local boolean value to determine whether the count has been updated for the current thread. But this doesn't help me if the user say, uses boost::bind to bind my object to a boost::function and uses that to start a boost::thread. The new thread will have a reference to my object, and may hold on to it for an indefinite period of time before calling any of its methods, thus leading to a stale count. I could write my own wrapper around boost::thread to handle this, but that doesn't help if the user boost::bind's an object that contains my object (I can't specialize based on the presence of a member type -- at least I don't know of any way to do that) and uses that to start a boost::thread.
Is there any way to do this? The only means I can think of requires too much work from users -- I provide a wrapper around boost::thread that calls a special hook method on the object being passed in provided it exists, and users add the special hook method to any class that contains my object.
Edit: For the sake of this question we can assume I control the means to make new threads. So I can wrap boost::thread for example and expect that users will use my wrapped version, and not have to worry about users simultaneously using pthreads, etc.
Edit2: One can also assume that I have some means of thread local storage available, through __thread or boost::thread_specific_ptr. It's not in the current standard, but hopefully will be soon.
In general, this is hard. The question of "who has a reference to me?" is not generally solvable in C++. It may be worth looking at the bigger picture of the specific problem(s) you are trying to solve, and seeing if there is a better way.
There are a few things I can come up with that can get you partway there, but none of them are quite what you want.
You can establish the concept of "the owning thread" for an object, and REJECT operations from any other thread, a la Qt GUI elements. (Note that trying to do things thread-safely from threads other than the owner won't actually give you thread-safety, since if the owner isn't checked it can collide with other threads.) This at least gives your users fail-fast behavior.
You can encourage reference counting by having the user-visible objects being lightweight references to the implementation object itself [and by documenting this!]. But determined users can work around this.
And you can combine these two-- i.e. you can have the notion of thread ownership for each reference, and then have the object become aware of who owns the references. This could be very powerful, but not really idiot-proof.
You can start restricting what users can and cannot do with the object, but I don't think covering more than the obvious sources of unintentional error is worthwhile. Should you be declaring operator& private, so people can't take pointers to your objects? Should you be preventing people from dynamically allocating your object? It depends on your users to some degree, but keep in mind you can't prevent references to objects, so eventually playing whack-a-mole will drive you insane.
So, back to my original suggestion: re-analyze the big picture if possible.
Short of a pimpl-style implementation that does a thread-id check before every dereference, I don't see how you could do this:
class MyClass;

class MyClassImpl {
    friend class MyClass;
    threadid_t owning_thread;
public:
    void doSomethingThreadSafe();
    void doSomethingNoSafetyCheck();
};

class MyClass {
    MyClassImpl* impl;
public:
    void doSomething() {
        if (__threadid() != impl->owning_thread) {
            impl->doSomethingThreadSafe();
        } else {
            impl->doSomethingNoSafetyCheck();
        }
    }
};
Note: I know the OP wants to list threads with active pointers, I don't think that's feasible. The above implementation at least lets the object know when there might be contention. When to change the owning_thread depends heavily on what doSomething does.
Usually you cannot do this programmatically.
Unfortunately, the way to go is to design your program in such a way that you can prove (i.e. convince yourself) that certain objects are shared and others are thread-private.
The current C++ standard does not even have the notion of a thread, so there is no standard portable notion of thread local storage, in particular.
If I understood your problem correctly I believe this could be done in Windows using Win32 function GetCurrentThreadId().
Below is a quick and dirty example of how it could be used. Thread synchronisation should rather be done with a lock object.
If you create a CMyThreadTracker object at the top of every member function of the object to be tracked, _handle_vector will contain the ids of the threads that use your object.
#include <process.h>
#include <windows.h>
#include <tchar.h>
#include <stdio.h>
#include <vector>
#include <algorithm>
#include <functional>

using namespace std;

class CMyThreadTracker
{
    vector<DWORD> &_handle_vector;
    DWORD _h;
    CRITICAL_SECTION &_CriticalSection;

public:
    CMyThreadTracker(vector<DWORD> &handle_vector, CRITICAL_SECTION &crit)
        : _handle_vector(handle_vector), _CriticalSection(crit)
    {
        EnterCriticalSection(&_CriticalSection);
        _h = GetCurrentThreadId();
        _handle_vector.push_back(_h);
        printf("thread id %08lx\n", _h);
        LeaveCriticalSection(&_CriticalSection);
    }

    ~CMyThreadTracker()
    {
        EnterCriticalSection(&_CriticalSection);
        vector<DWORD>::iterator ee = remove_if(_handle_vector.begin(), _handle_vector.end(),
                                               bind2nd(equal_to<DWORD>(), _h));
        _handle_vector.erase(ee, _handle_vector.end());
        LeaveCriticalSection(&_CriticalSection);
    }
};

class CMyObject
{
    vector<DWORD> _handle_vector;

public:
    void method1(CRITICAL_SECTION &CriticalSection)
    {
        CMyThreadTracker tt(_handle_vector, CriticalSection);
        printf("method 1\n");

        EnterCriticalSection(&CriticalSection);
        for (size_t i = 0; i < _handle_vector.size(); ++i)
        {
            printf(" this object is currently used by thread %08lx\n", _handle_vector[i]);
        }
        LeaveCriticalSection(&CriticalSection);
    }
};

CMyObject mo;
CRITICAL_SECTION CriticalSection;

unsigned __stdcall ThreadFunc(void *arg)
{
    unsigned int sleep_time = *(unsigned int *)arg;
    while (true)
    {
        Sleep(sleep_time);
        mo.method1(CriticalSection);
    }
    _endthreadex(0);
    return 0;
}

int _tmain(int argc, _TCHAR *argv[])
{
    HANDLE hThread;
    unsigned int threadID;
    static unsigned int sleep_times[5]; // must outlive the threads that read them

    if (!InitializeCriticalSectionAndSpinCount(&CriticalSection, 0x80000400))
        return -1;

    for (int i = 0; i < 5; ++i)
    {
        sleep_times[i] = 1000 * (i + 1);
        hThread = (HANDLE)_beginthreadex(NULL, 0, &ThreadFunc, &sleep_times[i], 0, &threadID);
        printf("creating thread %08x\n", threadID);
    }

    WaitForSingleObject(hThread, INFINITE);
    return 0;
}
EDIT1:
As mentioned in the comment, reference dispensing could be implemented as below. A vector could hold the unique thread ids referring to your object. You may also need to implement a custom assignment operator to deal with the object references being copied by a different thread.
class MyClass
{
public:
    static MyClass & Create()
    {
        static MyClass * p = new MyClass();
        return *p;
    }
    static void Destroy(MyClass * p)
    {
        delete p;
    }

private:
    MyClass() {}
    ~MyClass() {}
};

class MyCreatorClass
{
    MyClass & _my_obj;

public:
    MyCreatorClass() : _my_obj(MyClass::Create())
    {
    }

    MyClass & GetObject()
    {
        // TODO:
        // use GetCurrentThreadId to get the thread id
        // check if the id is already in the vector
        // add this to a vector
        return _my_obj;
    }

    ~MyCreatorClass()
    {
        MyClass::Destroy(&_my_obj);
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    MyCreatorClass mcc;

    MyClass &o1 = mcc.GetObject();
    MyClass &o2 = mcc.GetObject();

    return 0;
}
The solution I'm familiar with is to state "if you don't use the correct API to interact with this object, then all bets are off."
You may be able to turn your requirements around and make it possible for any threads that reference the object subscribe to signals from the object. This won't help with race conditions, but allows threads to know when the object has unloaded itself (for instance).
If the problem is "I have an object and I want to know how many threads access it", and you can also enumerate your threads, then you can solve this with thread-local storage.
Allocate a TLS index for your object. Make a private method called "registerThread" which simply sets the thread TLS to point to your object.
The key extension to the poster's original idea is that during every method call, call this registerThread(). Then you don't need to detect when or who created the thread, it's just set (often redundantly) during every actual access.
To see which threads have accessed the object, just examine their TLS values.
Upside: simple and pretty efficient.
Downside: solves the posted question but doesn't extend smoothly to multiple objects or dynamic threads that aren't enumerable.
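A minimal sketch of that registerThread() idea using C++11 thread_local instead of a raw TLS index (the class and method names are illustrative; as noted above, the per-thread flag does not extend smoothly to multiple tracked objects):

#include <mutex>
#include <set>
#include <thread>

class Tracked
{
private:
    std::mutex mutex;
    std::set<std::thread::id> visitors; // every thread that has ever called a method

    void registerThread()
    {
        thread_local bool registered = false; // one flag per thread (shared by all Tracked objects)
        if (!registered)
        {
            std::lock_guard<std::mutex> lock(mutex);
            visitors.insert(std::this_thread::get_id());
            registered = true;
        }
    }

public:
    void someMethod()
    {
        registerThread(); // called at the top of every public method
        // ... actual work ...
    }
};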