C++/MFC/ATL Thread-Safe String read/write

I have an MFC class that launches threads, and those threads need to modify CString members of the main class.
I hate mutex locks, so there must be an easier way to do this.
I am thinking of using the Boost library, atl::atomic, or shared_ptr variables.
What is the best method of reading and writing the string while staying thread safe?
class MyClass
{
public:
MyClass();
static UINT MyThread(LPVOID pArg);
CString m_strInfo;
};
MyClass::MyClass()
{
AfxBeginThread(MyThread, this);
CString strTmp=m_strInfo; // this may cause crash
}
UINT MyClass::MyThread(LPVOID pArg)
{
MyClass* pClass=(MyClass*)pArg;
pClass->m_strInfo=_T("New Value"); // non thread-safe change
return 0;
}
According to MSDN, shared_ptr handles its reference counting in a thread-safe way: https://msdn.microsoft.com/en-us/library/bb982026.aspx
So is this a better method?
#include <memory>
class MyClass
{
public:
MyClass();
static UINT MyThread(LPVOID pArg);
std::shared_ptr<CString> m_strInfo; // ********
};
MyClass::MyClass()
{
AfxBeginThread(MyThread, this);
CString strTmp=*m_strInfo; // this may cause crash
}
UINT MyClass::MyThread(LPVOID pArg)
{
MyClass* pClass=(MyClass*)pArg;
std::shared_ptr<CString> newValue(new CString());
newValue->SetString(_T("New Value"));
pClass->m_strInfo=newValue; // thread-safe change?
return 0;
}

You could implement some kind of lockless scheme to achieve that, but it depends on how you use MyClass and your thread. If your thread processes some data and, after processing it, needs to update MyClass, then consider putting your string data in a separate class, e.g.:
struct StringData {
CString m_strInfo;
};
then inside your MyClass:
class MyClass
{
public:
MyClass();
static UINT MyThread(LPVOID pArg);
StringData* m_pstrData;
StringData* m_pstrDataForThreads;
};
Now, the idea is that in your main-thread code you use m_pstrData, but you need atomics to read a local pointer to it, e.g.:
MyClass::MyClass()
{
AfxBeginThread(MyThread, this);
StringData* m_pstrDataTemp = ATOMIC_READ(m_pstrData);
if ( m_pstrDataTemp )
CString strTmp=m_pstrDataTemp->m_strInfo; // this may NOT cause crash
}
Once your thread has finished processing the data and wants to update the string, atomically assign m_pstrDataForThreads to m_pstrData and allocate a new m_pstrDataForThreads.
The problem is how to safely delete the old m_pstrData; I suppose you could use std::shared_ptr here.
In the end it looks kind of complicated and, IMO, not really worth the effort; at the very least it is hard to tell whether this is really thread safe, and whether it will stay thread safe as the code grows more complicated. Also, this covers the single-worker-thread case, and you say you have multiple threads. That's why a critical section is the starting point; if it turns out to be too slow, then think about a lockless approach.
By the way, depending on how often your string data is updated, you could also think about using PostMessage to safely pass a pointer to the new string to your main thread (see the sketch below).
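A rough sketch of that PostMessage idea, assuming the main window's HWND is available to the worker; the message id, handler name, and hMainWnd are illustrative, not from the original code:
#include <memory>

#define WM_APP_NEW_STRING (WM_APP + 1)   // illustrative user-defined message

// Worker thread: hand ownership of a heap-allocated string to the UI thread.
UINT MyClass::MyThread(LPVOID pArg)
{
    CString* pNew = new CString(_T("New Value"));
    if (!::PostMessage(hMainWnd, WM_APP_NEW_STRING, 0, reinterpret_cast<LPARAM>(pNew)))
        delete pNew;                     // posting failed, avoid leaking the string
    return 0;
}

// Main (UI) thread handler, registered e.g. with ON_MESSAGE(WM_APP_NEW_STRING, OnNewString):
LRESULT CMainWnd::OnNewString(WPARAM, LPARAM lParam)
{
    std::unique_ptr<CString> pNew(reinterpret_cast<CString*>(lParam));
    m_strInfo = *pNew;                   // only the UI thread ever touches m_strInfo
    return 0;
}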
[edit]
ATOMIC_READ does not exist; it is just a placeholder to make the idea readable. Use e.g. C++11 atomics, example below:
#include <atomic>
#include <cstdint>
#include <iostream>
...
std::atomic<uint64_t> sharedValue(0);
sharedValue.store(123, std::memory_order_relaxed); // atomically store
uint64_t ret = sharedValue.load(std::memory_order_relaxed); // atomically read
std::cout << ret;
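And for the pointer-publication scheme itself, a minimal sketch that uses std::shared_ptr (to keep the old data alive until readers are done with it) together with the std::atomic_load/std::atomic_store overloads for shared_ptr; the class and method names here are illustrative:
#include <atlstr.h>   // CString (MFC/ATL)
#include <memory>

struct StringData { CString m_strInfo; };

class MyClass
{
public:
    // Worker thread: publish a fully built StringData with an atomic pointer swap.
    void Publish(const CString& value)
    {
        auto fresh = std::make_shared<StringData>();
        fresh->m_strInfo = value;
        std::atomic_store(&m_pstrData, fresh);
    }
    // Any thread: take a snapshot; the old object stays alive while this copy exists.
    CString Read() const
    {
        std::shared_ptr<StringData> snapshot = std::atomic_load(&m_pstrData);
        return snapshot ? snapshot->m_strInfo : CString();
    }
private:
    std::shared_ptr<StringData> m_pstrData;
};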

I would have used a simpler approach: protect the variable behind a SetStrInfo method:
void SetStrInfo(const CString& str)
{
[Lock-here]
m_strInfo = str;
[Unlock-here]
}
For locking and unlocking we may use a CCriticalSection (as a member of the class), or wrap it in CSingleLock for RAII. We may also use slim reader/writer locks for performance reasons (again wrapped with RAII - write a simple class), or the newer C++ facilities for RAII locking/unlocking.
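A minimal sketch of the CCriticalSection/CSingleLock variant (the getter and member names are illustrative):
#include <afxmt.h>   // CCriticalSection, CSingleLock

class MyClass
{
public:
    void SetStrInfo(const CString& str)
    {
        CSingleLock lock(&m_cs, TRUE);   // locks here, unlocks in the destructor
        m_strInfo = str;
    }
    CString GetStrInfo() const
    {
        CSingleLock lock(&m_cs, TRUE);
        return m_strInfo;                // return a copy while holding the lock
    }
private:
    mutable CCriticalSection m_cs;       // protects m_strInfo
    CString m_strInfo;
};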
Call me old-school, but to me the std namespace offers a complicated set of options - it doesn't suit everything, or everyone.

Related

Using boost::mutex as a private member of class

I have a class that contains a boost::mutex as a private member. It becomes locked when you call one of its public functions and unlocks when the function exits. This is to provide synchronous access to the object's internals.
#include <deque>
#include <string>
#include <boost/thread.hpp>
using std::string;
class StringDeque
{
boost::mutex mtx;
std::deque<string> string_deque;
public:
StringDeque() { }
void addToDeque(const string& str_to_add)
{
boost::lock_guard<boost::mutex> guard(mtx);
string_deque.push_back(str_to_add);
}
string popFromDeque()
{
boost::lock_guard<boost::mutex> guard(mtx);
if (string_deque.empty())
return string(); // nothing queued yet; the caller treats an empty string as "try again"
string popped_string = string_deque.front();
string_deque.pop_front();
return popped_string;
}
};
This class isn't meant to be particularly useful but I am just playing around with mutexes and threads.
I have a main() that also has another function defined that pops strings from the class and prints them in a thread. It will repeat this 10 times and then return from the function. Once again, this is purely for testing purposes. It looks like this:
void printTheStrings(StringDeque& str_deque)
{
int i = 0;
while(i < 10)
{
string popped_string = str_deque.popFromDeque();
if(popped_string.empty())
{
sleep(1);
continue;
}
cout << popped_string << endl;
++i;
}
}
int main()
{
StringDeque str_deque;
boost::thread the_thread(printTheStrings, str_deque);
str_deque.addToDeque("Say your prayers");
str_deque.addToDeque("Little One");
str_deque.addToDeque("And Don't forget My Son");
str_deque.addToDeque("To include everyone");
str_deque.addToDeque("I tuck you in");
str_deque.addToDeque("Warm within");
str_deque.addToDeque("Keep you free from sin");
str_deque.addToDeque("Until the sandman he comes");
str_deque.addToDeque("Sleep with one eye open");
str_deque.addToDeque("Gripping your pillow tight");
the_thread.join();
}
The error I keep getting is that boost::mutex is noncopyable. The printTheStrings() function takes a reference so I am a little confused as to why this is trying to copy the object.
I have read up a bit on this and one solution I keep reading is to make the boost::mutex a static private member of the object. However, this defeats the purpose of my mutex since I want it to be on an object-by-object basis rather than a class variable.
Is this just bad use of mutexes? Should I just be rethinking this entire application?
EDIT:
I just discovered condition_variable which should serve my purpose a lot better to have the thread wait until there is something actually in the deque before waking up to pop from the deque and print it. All the examples that I see define these mutexes and condition_variable objects at a global scope. This seems very... not object-oriented in my opinion. Even the examples straight from Boost themselves show that it is done in this way. Is this really how other people use these objects?
You are correct that printTheStrings takes the StringDeque by reference. Your problem is that boost::thread takes its arguments by value. To force it to take the arguments by reference you will need to modify things to:
boost::thread the_thread(printTheStrings, boost::ref(str_deque));
As an aside, from C++11 onwards threads are part of the standard library, so you should probably use std::thread instead.
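A minimal sketch of that standard-library variant, reusing the question's StringDeque and printTheStrings (std::thread copies its arguments just like boost::thread, so the reference still has to be wrapped):
#include <functional>   // std::ref
#include <thread>

int main()
{
    StringDeque str_deque;
    std::thread the_thread(printTheStrings, std::ref(str_deque));
    str_deque.addToDeque("Say your prayers");
    // ... add the remaining strings as before ...
    the_thread.join();
}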

c, c++ memory in shared libraries

I am uncertain how static global memory is managed in DLL's and shared objects. I do not know if each handles it in the same way or different on different platforms.
Consider you have a library of classes, one of those classes is a mutex class and other classes in the library will use that mutex. What is the best or safest way to allocate a mutex in the library? I can see a couple options:
Make the mutex private in the class. I can't see this working, because the mutex would only be valid within the lifetime of the object. Maybe making the object a singleton and initializing it when the library is loaded (with DllMain/DLL_PROCESS_ATTACH or __attribute__((constructor))) would work, I am not sure.
Allocate the mutex outside of the class in static global space of the library. I think this would be the best option but what exactly happens when the DLL is loaded? If I made an object static and global in a library when does it get allocated, where in the program does it get allocated? What happens if the library is loaded during runtime as opposed to when the program starts?
Any information about this is greatly appreciated!
The way memory is managed in shared images depends on specific platforms, and DLLs are specific to Microsoft Windows.
Generally, you should always avoid using global/shared static variables, as they may introduce serious problems or bugs which are hard to identify or resolve. Even singleton classes may cause several issues in C++, especially in libraries or multi-threaded applications. (And generally, using singletons is not considered good practice even in higher-level languages.)
For guarding against race conditions on shared state, the best option would be to use a scoped-lock class implemented with the RAII technique, alongside the shared_ptr smart pointer, which automates memory de-allocation.
The code below illustrates implementing a Mutex class using the Windows API and the above techniques (as well as the Pimpl idiom):
// Mutex.h
#pragma once
#include <memory>
class Mutex
{
public:
typedef unsigned long milliseconds;
Mutex();
~Mutex();
void Lock();
void Unlock();
bool TryLock();
bool TimedLock(milliseconds ms);
private:
struct private_data;
std::shared_ptr<private_data> data;
// Actual data is held in the private_data struct, which is not accessible
// from outside. We only hold a "copyable handle" to it through the shared_ptr,
// so copies of a Mutex (e.g. via the assignment operator) share the same
// underlying handle instead of duplicating it. private_data's destructor is
// called automatically only when the last Mutex object leaves its scope.
};
// Mutex.cpp
#include "Mutex.h"
#include <windows.h>
#include <crtdbg.h>
#ifndef ASSERT
#define ASSERT _ASSERTE // ASSERT normally comes from MFC/ATL; fall back to the CRT macro otherwise
#endif
struct Mutex::private_data
{
HANDLE hMutex;
private_data()
{
hMutex = CreateMutex(NULL, FALSE, NULL);
}
~private_data()
{
// Unlock(); ?? :/
CloseHandle(hMutex);
}
};
Mutex::Mutex()
: data (new private_data())
{ }
Mutex::~Mutex()
{ }
void Mutex::Lock()
{
DWORD ret = WaitForSingleObject(data->hMutex, INFINITE);
ASSERT(ret == WAIT_OBJECT_0);
}
void Mutex::Unlock()
{
ReleaseMutex(data->hMutex);
}
bool Mutex::TryLock()
{
DWORD ret = WaitForSingleObject(data->hMutex, 0);
ASSERT(ret != WAIT_ABANDONED);
ASSERT(ret != WAIT_FAILED);
return ret != WAIT_TIMEOUT;
}
bool Mutex::TimedLock(milliseconds ms)
{
DWORD ret = WaitForSingleObject(data->hMutex, static_cast<DWORD>(ms));
ASSERT(ret != WAIT_ABANDONED);
ASSERT(ret != WAIT_FAILED);
return ret != WAIT_TIMEOUT;
}
// ScopedLock.h
#pragma once
#include "Mutex.h"
class ScopedLock
{
private:
Mutex& m_mutex;
ScopedLock(ScopedLock const&); // disable copy constructor
ScopedLock& operator= (ScopedLock const&); // disable assignment operator
public:
ScopedLock(Mutex& mutex)
: m_mutex(mutex)
{ m_mutex.Lock(); }
~ScopedLock()
{ m_mutex.Unlock(); }
};
Sample usage:
Mutex m1;
MyClass1 o1;
MyClass2 o2;
...
{
ScopedLock lock(m1);
// thread-safe operations
o1.Decrease();
o2.Increase();
} // lock is released automatically here upon leaving scope
// non-thread-safe operations
o1.Decrease();
o2.Increase();
While the above code should give you the basic idea, an even better option is to use a high-quality C++ library such as Boost, which already provides mutex, scoped_lock and many other classes. (And fortunately C++11 provides full coverage of synchronization classes, freeing you from having to use the Boost libraries.)
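For instance, with C++11 the hand-rolled Mutex/ScopedLock pair collapses to roughly the following sketch (mirroring the sample usage above; MyClass1/MyClass2 remain illustrative types):
#include <mutex>

std::mutex m1;

void Example(MyClass1& o1, MyClass2& o2)
{
    std::lock_guard<std::mutex> lock(m1);   // locks now, unlocks when leaving scope
    // thread-safe operations
    o1.Decrease();
    o2.Increase();
}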
UPDATE:
I suggest you search for topics about automatic garbage collection in C++, as well as the RAII technique.

Is this an acceptable way to lock a container using C++?

I need to implement (in C++) a thread safe container in such a way that only one thread is ever able to add or remove items from the container. I have done this kind of thing before by sharing a mutex between threads. This leads to a lot of mutex objects being littered throughout my code and makes things very messy and hard to maintain.
I was wondering if there is a neater and more object oriented way to do this. I thought of the following simple class wrapper around the container (semi-pseudo C++ code)
class LockedList {
private:
std::list<MyClass> m_List;
public:
MutexObject Mutex;
};
so that locking could be done in the following way
LockedList lockableList; //create instance
lockableList.Mutex.Lock(); // Lock object
... // search and add or remove items
lockableList.Mutex.Unlock(); // Unlock object
So my question really is to ask whether this is a good approach from a design perspective. I know that allowing public access to members is frowned upon from a design perspective; does the above design have any serious flaws in it? If so, is there a better way to implement thread-safe container objects?
I have read a lot of books on design and C++ in general but there really does seem to be a shortage of literature regarding multithreaded programming and multithreaded software design.
If the above is a poor approach to solving my problem, could anyone suggest a way to improve it, or point me towards some information that explains good ways to design classes to be thread safe? Many thanks.
I would rather design a resource owner that locks a mutex and returns an object that can be used by the thread. Once the thread has finished and stops using the object, the resource is automatically returned to its owner and the lock is released.
template<typename Resource>
class ResourceOwner
{
Lock lock;
Resource resource;
public:
ResourceHolder<Resource> getExclusiveAccess()
{
// Let the ResourceHolder lock and unlock the lock
// So while a thread holds a copy of this object only it
// can access the resource. Once the thread releases all
// copies then the lock is released allowing another
// thread to call getExclusiveAccess().
//
// Make it behave like a form of smart pointer
// 1) So you can pass it around.
// 2) So all properties of the resource are provided via ->
// 3) So the lock is automatically released when the thread
// releases the object.
return ResourceHolder<Resource>(lock, resource);
}
};
The resource holder (I have not thought hard about this, so it can be improved):
template<typename Resource>
class ResourceHolder
{
// Use a shared_ptr to hold the scoped lock.
// When first created it will lock the lock. When the shared_ptr
// destroys the scoped lock (after all copies are gone)
// this will unlock the lock, thus allowing others to use
// getExclusiveAccess() on the owner.
std::shared_ptr<scoped_lock> locker;
Resource& resource; // local reference on the resource.
public:
ResourceHolder(Lock& lock, Resource& r)
: locker(new scoped_lock(lock))
, resource(r)
{}
// Access to the resource via the -> operator
// Thus allowing you to use all normal functionality of
// the resource.
Resource* operator->() {return &resource;}
};
Now a lockable list is:
ResourceOwner<list<int>> lockedList;
void threadedCode()
{
ResourceHolder<list<int>> list = lockedList.getExclusiveAccess();
list->push_back(1);
}
// When list goes out of scope here.
// It is destroyed and the member locker will unlock `lock`
// in its destructor thus allowing the next thread to call getExclusiveAccess()
I would do something like this to make it more exception-safe by using RAII.
class LockedList {
private:
std::list<MyClass> m_List;
MutexObject Mutex;
friend class LockableListLock;
};
class LockableListLock {
private:
LockedList& list_;
public:
LockableListLock(LockedList& list) : list_(list) { list.Mutex.Lock(); }
~LockableListLock(){ list_.Mutex.Unlock(); }
};
You would use it like this
LockedList list;
{
LockableListLock lock(list); // The list is now locked.
// do stuff to the list
} // The list is automatically unlocked when lock goes out of scope.
You could also make the class force you to lock it before doing anything with it by adding wrappers around the interface for std::list in LockableListLock so instead of accessing the list through the LockedList class, you would access the list through the LockableListLock class. For instance, you would make this wrapper around std::list::begin()
std::list<MyClass>::iterator LockableListLock::begin() {
return list_.m_List.begin();
}
and then use it like this
LockedList list;
LockableListLock lock(list);
// list.begin(); //This is a compiler error so you can't
//access the list without locking it
lock.begin(); // This gets you the beginning of the list
Okay, I'll state a little more directly what others have already implied: at least part, and quite possibly all, of this design is probably not what you want. At the very least, you want RAII-style locking.
I'd also make the locked class (or whatever you prefer to call it) a template, so you can decouple the locking from the container itself.
// C++-like pseudo-code. Not intended to compile as-is.
struct mutex {
void lock() { /* ... */ }
void unlock() { /* ... */ }
};
struct lock {
lock(mutex &m) : m_(m) { m_.lock(); }
~lock() { m_.unlock(); }
mutex &m_;
};
template <class container>
class locked {
typedef typename container::value_type value_type;
typedef typename container::reference_type reference_type;
// ...
container c;
mutex m;
public:
void push_back(reference_type const t) {
lock l(m);
c.push_back(t);
}
void push_front(reference_type const t) {
lock l(m);
c.push_front(t);
}
// etc.
};
This makes the code fairly easy to write and (for at least some cases) still get correct behavior -- e.g., where your single-threaded code might look like:
std::vector<int> x;
x.push_back(y);
...your thread-safe code would look like:
locked<std::vector<int> > x;
x.push_back(y);
Assuming you provide the usual begin(), end(), push_front, push_back, etc., your locked<container> will still be usable like a normal container, so it works with standard algorithms, iterators, etc.
The problem with this approach is that it makes LockedList non-copyable. For details on this snag, please look at this question:
Designing a thread-safe copyable class
I have tried various things over the years, and a mutex declared beside the container declaration always turns out to be the simplest way to go (once all the bugs have been fixed after naively implementing other methods).
You do not need to 'litter' your code with mutexes. You just need one mutex, declared beside the container it guards.
It's hard to say that coarse-grained locking is a bad design decision; we'd need to know about the system the code lives in to judge that. It is a good starting point unless you already know it won't work. Do the simplest thing that could possibly work first.
You could improve that code, though, by making it less likely to fail if you leave the scope without unlocking:
struct ScopedLocker {
ScopedLocker(MutexObject &mo_) : mo(mo_) { mo.Lock(); }
~ScopedLocker() { mo.Unlock(); }
MutexObject &mo;
};
You could also hide the implementation from users.
class LockedList {
private:
std::list<MyClass> m_List;
MutexObject Mutex;
public:
struct ScopedLocker {
ScopedLocker(LockedList &ll);
~ScopedLocker();
};
};
Then you just pass the locked list to it, without users having to worry about the details of the MutexObject.
You can also have the list handle all the locking internally, which is alright in some cases. The design issue is iteration. If the list locks internally, then operations like this are much worse than letting the user of the list decide when to lock.
void foo(LockedList &list) {
for (size_t i = 0; i < 100000000; i++) {
list.push_back(i);
}
}
Generally speaking, it's a hard topic to give advice on because of problems like this. More often than not, it's more about how you use an object. There are a lot of leaky abstractions when you try and write code that solves multi-processor programming. That is why you see more toolkits that let people compose the solution that meets their needs.
There are books that discuss multi-processor programming, though they are few. With all the new C++11 features coming out, there should be more literature coming within the next few years.
I came up with this (which I'm sure can be improved to take more than two arguments):
template<class T1, class T2>
class combine : public T1, public T2
{
public:
/// We always need a virtual destructor.
virtual ~combine() { }
};
This allows you to do:
// Combine an std::mutex and std::map<std::string, std::string> into
// a single instance.
combine<std::mutex, std::map<std::string, std::string>> mapWithMutex;
// Lock the map within scope to modify the map in a thread-safe way.
{
// Lock the map.
std::lock_guard<std::mutex> locked(mapWithMutex);
// Modify the map.
mapWithMutex["Person 1"] = "Jack";
mapWithMutex["Person 2"] = "Jill";
}
If you wish to use an std::recursive_mutex and an std::set, that would also work.

Thread safe container

Here is an exemplary container class, in pseudo-code:
class Container
{
public:
Container(){}
~Container(){}
void add(data new)
{
// addition of data
}
data get(size_t which)
{
// returning some data
}
void remove(size_t which)
{
// delete specified object
}
private:
data d;
};
How can this container be made thread safe? I have heard about mutexes - where should these mutexes be placed? Should the mutex be static for a class, or maybe in global scope? What is a good library for this task in C++?
First of all, mutexes should not be static for a class as long as you are going to use more than one instance. There are many cases where you should or shouldn't use them, so without seeing your code it's hard to say. Just remember, they are used to synchronise access to shared data, so it's wise to place them inside methods that modify or rely on the object's state. In your case I would use one mutex to protect the whole object and lock all three methods, like:
class Container
{
public:
Container(){}
~Container(){}
void add(data new)
{
lock_guard<Mutex> lock(mutex);
// addition of data
}
data get(size_t which)
{
lock_guard<Mutex> lock(mutex);
// getting copy of value
// return that value
}
void remove(size_t which)
{
lock_guard<Mutex> lock(mutex);
// delete specified object
}
private:
data d;
Mutex mutex;
};
Intel Threading Building Blocks (TBB) provides a bunch of thread-safe container implementations for C++. It has been open-sourced; you can download it from: http://threadingbuildingblocks.org/ver.php?fid=174 .
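As a taste of what that looks like, here is a minimal sketch using TBB's concurrent_queue, which handles the synchronisation internally; the producer/consumer function names are illustrative:
#include <tbb/concurrent_queue.h>
#include <string>

tbb::concurrent_queue<std::string> queue;   // safe to access from multiple threads

void producer()
{
    queue.push("hello");                    // thread-safe push
}

void consumer()
{
    std::string item;
    if (queue.try_pop(item))                // non-blocking; returns false if empty
    {
        // use item
    }
}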
First: sharing mutable state between threads is hard. You should be using a library that has been audited and debugged.
That said, there are two different functional issues:
you want a container to provide safe atomic operations
you want a container to provide safe multiple operations
The idea of multiple operations is that multiple accesses to the same container must be executed successively, under the control of a single entity. They require the caller to "hold" the mutex for the duration of the transaction so that only it changes the state.
1. Atomic operations
This one appears simple:
add a mutex to the object
at the start of each method grab a mutex with a RAII lock
Unfortunately it's also plain wrong.
The issue is re-entrancy. It is likely that some methods will call other methods on the same object. If those once again attempt to grab the mutex, you get a deadlock.
It is possible to use re-entrant mutexes. They are a bit slower, but allow the same thread to lock a given mutex as much as it wants. The number of unlocks should match the number of locks, so once again, RAII.
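A minimal sketch of the re-entrant variant with C++11's std::recursive_mutex (the class and method names are illustrative):
#include <mutex>

class C {
public:
    void foo()
    {
        std::lock_guard<std::recursive_mutex> lock(_mutex);
        bar();                               // re-locking from the same thread is allowed
    }
    void bar()
    {
        std::lock_guard<std::recursive_mutex> lock(_mutex);
        /* do something */
    }
private:
    std::recursive_mutex _mutex;
};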
Another approach is to use dispatching methods:
class C {
public:
void foo() { Lock lock(_mutex); foo_impl(); }
private:
void foo_impl() { /* do something; may call other _impl methods without re-locking */ }
};
The public methods are simple forwarders to the private work-methods and simply lock. Then one just has to ensure that the private methods never take the mutex...
Of course there are risks of accidentally calling a locking method from a work-method, in which case you deadlock. Read on to avoid this ;)
2. Multiple operations
The only way to achieve this is to have the caller hold the mutex.
The general method is simple:
add a mutex to the container
provide a handle on this mutex
cross your fingers that the caller will never forget to hold the mutex while accessing the class
I personally prefer a much saner approach.
First, I create a "bundle of data", which simply represents the class data (+ a mutex), and then I provide a Proxy in charge of grabbing the mutex. The data is locked down so that only the proxy may access the state.
class ContainerData {
protected:
friend class ContainerProxy;
Mutex _mutex;
void foo();
void bar();
private:
// some data
};
class ContainerProxy {
public:
ContainerProxy(ContainerData& data): _data(data), _lock(data._mutex) {}
void foo() { _data.foo(); }
void bar() { foo(); _data.bar(); }
private:
ContainerData& _data;
Lock _lock;
};
Note that it is perfectly safe for the Proxy to call its own methods. The mutex will be released automatically by the destructor.
The mutex can still be reentrant if multiple Proxies are desired. But really, when multiple proxies are involved, it generally turns into a mess. In debug mode, it's also possible to add a "check" that the mutex is not already held by this thread (and assert if it is).
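A sketch of that debug check, assuming a standard mutex wrapped so it remembers which thread currently holds it (the CheckedMutex name and members are illustrative):
#include <atomic>
#include <cassert>
#include <mutex>
#include <thread>

class CheckedMutex
{
public:
    void lock()
    {
        // Fires in debug builds if the calling thread already holds the mutex.
        assert(m_owner.load() != std::this_thread::get_id() && "re-entrant lock attempt");
        m_mutex.lock();
        m_owner.store(std::this_thread::get_id());
    }
    void unlock()
    {
        m_owner.store(std::thread::id());   // reset to "no owner"
        m_mutex.unlock();
    }
private:
    std::mutex m_mutex;
    std::atomic<std::thread::id> m_owner;   // thread currently holding the lock
};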
3. Reminder
Using locks is error-prone. Deadlocks are a common cause of error and occur as soon as you have two mutexes (or one and re-entrancy). When possible, prefer using higher level alternatives.
Add a mutex as an instance variable of the class. Initialize it in the constructor, lock it at the very beginning of every method (including the destructor), and unlock it at the end of the method. Adding a global mutex for all instances of the class (a static member or just in global scope) may be a performance penalty.
There is also a very nice collection of lock-free containers (including maps) by Max Khiszinsky
LibCDS1 Concurrent Data Structures
Here is the documentation page:
http://libcds.sourceforge.net/doc/index.html
It can be kind of intimidating to get started, because it is fully generic and requires that you register a chosen garbage-collection strategy and initialize it. Of course, the threading library is configurable and you need to initialize that as well :)
See the following links for some getting started info:
initialization of CDS and the threading manager
http://sourceforge.net/projects/libcds/forums/forum/1034512/topic/4600301/
the unit tests (cd build && ./build.sh ----debug-test for a debug build)
Here is base template for 'main':
#include <cds/threading/model.h> // threading manager
#include <cds/gc/hzp/hzp.h> // Hazard Pointer GC
int main()
{
// Initialize \p CDS library
cds::Initialize();
// Initialize Garbage collector(s) that you use
cds::gc::hzp::GarbageCollector::Construct();
// Attach main thread
// Note: it is needed if main thread can access to libcds containers
cds::threading::Manager::attachThread();
// Do some useful work
...
// Finish main thread - detaches internal control structures
cds::threading::Manager::detachThread();
// Terminate GCs
cds::gc::hzp::GarbageCollector::Destruct();
// Terminate \p CDS library
cds::Terminate();
}
Don't forget to attach any additional threads you are using:
#include <cds/threading/model.h>
int myThreadFunc(void *)
{
// initialize libcds thread control structures
cds::threading::Manager::attachThread();
// Now, you can work with GCs and libcds containers
....
// Finish working thread
cds::threading::Manager::detachThread();
}
1 (not to be confused with Google's compact data structures library)

Detecting when an object is passed to a new thread in C++?

I have an object for which I'd like to track the number of threads that reference it. In general, when any method on the object is called I can check a thread local boolean value to determine whether the count has been updated for the current thread. But this doesn't help me if the user say, uses boost::bind to bind my object to a boost::function and uses that to start a boost::thread. The new thread will have a reference to my object, and may hold on to it for an indefinite period of time before calling any of its methods, thus leading to a stale count. I could write my own wrapper around boost::thread to handle this, but that doesn't help if the user boost::bind's an object that contains my object (I can't specialize based on the presence of a member type -- at least I don't know of any way to do that) and uses that to start a boost::thread.
Is there any way to do this? The only means I can think of requires too much work from users -- I provide a wrapper around boost::thread that calls a special hook method on the object being passed in provided it exists, and users add the special hook method to any class that contains my object.
Edit: For the sake of this question we can assume I control the means to make new threads. So I can wrap boost::thread for example and expect that users will use my wrapped version, and not have to worry about users simultaneously using pthreads, etc.
Edit2: One can also assume that I have some means of thread local storage available, through __thread or boost::thread_specific_ptr. It's not in the current standard, but hopefully will be soon.
In general, this is hard. The question of "who has a reference to me?" is not generally solvable in C++. It may be worth looking at the bigger picture of the specific problem(s) you are trying to solve, and seeing if there is a better way.
There are a few things I can come up with that can get you partway there, but none of them are quite what you want.
You can establish the concept of "the owning thread" for an object, and REJECT operations from any other thread, a la Qt GUI elements. (Note that trying to do things thread-safely from threads other than the owner won't actually give you thread-safety, since if the owner isn't checked it can collide with other threads.) This at least gives your users fail-fast behavior.
You can encourage reference counting by having the user-visible objects being lightweight references to the implementation object itself [and by documenting this!]. But determined users can work around this.
And you can combine these two-- i.e. you can have the notion of thread ownership for each reference, and then have the object become aware of who owns the references. This could be very powerful, but not really idiot-proof.
You can start restricting what users can and cannot do with the object, but I don't think covering more than the obvious sources of unintentional error is worthwhile. Should you be declaring operator& private, so people can't take pointers to your objects? Should you be preventing people from dynamically allocating your object? It depends on your users to some degree, but keep in mind you can't prevent references to objects, so eventually playing whack-a-mole will drive you insane.
So, back to my original suggestion: re-analyze the big picture if possible.
Short of a pimpl-style implementation that does a thread-id check before every dereference, I don't see how you could do this:
class MyClass;
class MyClassImpl {
friend class MyClass;
threadid_t owning_thread;
public:
void doSomethingThreadSafe();
void doSomethingNoSafetyCheck();
};
class MyClass {
MyClassImpl* impl;
public:
void doSomething() {
if (__threadid() != impl->owning_thread) {
impl->doSomethingThreadSafe();
} else {
impl->doSomethingNoSafetyCheck();
}
}
};
Note: I know the OP wants to list the threads with active pointers; I don't think that's feasible. The above implementation at least lets the object know when there might be contention. When to change the owning_thread depends heavily on what doSomething does.
Usually you cannot do this programmatically.
Unfortunately, the way to go is to design your program in such a way that you can prove (i.e. convince yourself) that certain objects are shared and others are thread-private.
The current C++ standard does not even have the notion of a thread, so there is no standard portable notion of thread local storage, in particular.
If I understood your problem correctly I believe this could be done in Windows using Win32 function GetCurrentThreadId().
Below is a quick and dirty example of how it could be used. Thread synchronisation should rather be done with a lock object.
If you create an object of CMyThreadTracker at the top of every member function of your object to be tracked for threads, the _handle_vector should contain the thread ids that use your object.
#include <process.h>
#include <windows.h>
#include <tchar.h>
#include <stdio.h>
#include <vector>
#include <algorithm>
#include <functional>
using namespace std;
class CMyThreadTracker
{
vector<DWORD> & _handle_vector;
DWORD _h;
CRITICAL_SECTION &_CriticalSection;
public:
CMyThreadTracker(vector<DWORD> & handle_vector,CRITICAL_SECTION &crit):_handle_vector(handle_vector),_CriticalSection(crit)
{
EnterCriticalSection(&_CriticalSection);
_h = GetCurrentThreadId();
_handle_vector.push_back(_h);
printf("thread id %08x\n",_h);
LeaveCriticalSection(&_CriticalSection);
}
~CMyThreadTracker()
{
EnterCriticalSection(&_CriticalSection);
vector<DWORD>::iterator ee = remove_if(_handle_vector.begin(),_handle_vector.end(),bind2nd(equal_to<DWORD>(), _h));
_handle_vector.erase(ee,_handle_vector.end());
LeaveCriticalSection(&_CriticalSection);
}
};
class CMyObject
{
vector<DWORD> _handle_vector;
public:
void method1(CRITICAL_SECTION & CriticalSection)
{
CMyThreadTracker tt(_handle_vector,CriticalSection);
printf("method 1\n");
EnterCriticalSection(&CriticalSection);
for(size_t i=0;i<_handle_vector.size();++i)
{
printf(" this object is currently used by thread %08x\n",_handle_vector[i]);
}
LeaveCriticalSection(&CriticalSection);
}
};
CMyObject mo;
CRITICAL_SECTION CriticalSection;
unsigned __stdcall ThreadFunc( void* arg )
{
unsigned int sleep_time = *(unsigned int*)arg;
while ( true)
{
Sleep(sleep_time);
mo.method1(CriticalSection);
}
_endthreadex( 0 );
return 0;
}
int _tmain(int argc, _TCHAR* argv[])
{
HANDLE hThread;
unsigned int threadID;
if (!InitializeCriticalSectionAndSpinCount(&CriticalSection, 0x80000400) )
return -1;
unsigned int sleep_times[5]; // must outlive the loop: each thread reads its value through the pointer
for(int i=0;i<5;++i)
{
sleep_times[i] = 1000 *(i+1);
hThread = (HANDLE)_beginthreadex( NULL, 0, &ThreadFunc, &sleep_times[i], 0, &threadID );
printf("creating thread %08x\n",threadID);
}
WaitForSingleObject( hThread, INFINITE );
return 0;
}
EDIT1:
As mentioned in the comment, reference dispensing could be implemented as below. A vector could hold the unique thread ids referring to your object. You may also need to implement a custom assignment operator to deal with the object references being copied by a different thread.
class MyClass
{
public:
static MyClass & Create()
{
static MyClass * p = new MyClass();
return *p;
}
static void Destroy(MyClass * p)
{
delete p;
}
private:
MyClass(){}
~MyClass(){};
};
class MyCreatorClass
{
MyClass & _my_obj;
public:
MyCreatorClass():_my_obj(MyClass::Create())
{
}
MyClass & GetObject()
{
//TODO:
// use GetCurrentThreadId to get thread id
// check if the id is already in the vector
// add this to a vector
return _my_obj;
}
~MyCreatorClass()
{
MyClass::Destroy(&_my_obj);
}
};
int _tmain(int argc, _TCHAR* argv[])
{
MyCreatorClass mcc;
MyClass &o1 = mcc.GetObject();
MyClass &o2 = mcc.GetObject();
return 0;
}
The solution I'm familiar with is to state "if you don't use the correct API to interact with this object, then all bets are off."
You may be able to turn your requirements around and make it possible for any threads that reference the object subscribe to signals from the object. This won't help with race conditions, but allows threads to know when the object has unloaded itself (for instance).
If the problem is "I have an object and I want to know how many threads access it", and you can also enumerate your threads, then you can solve it with thread-local storage.
Allocate a TLS index for your object. Make a private method called "registerThread" which simply sets the thread TLS to point to your object.
The key extension to the poster's original idea is that during every method call, call this registerThread(). Then you don't need to detect when or who created the thread, it's just set (often redundantly) during every actual access.
To see which threads have accessed the object, just examine their TLS values.
Upside: simple and pretty efficient.
Downside: solves the posted question but doesn't extend smoothly to multiple objects or dynamic threads that aren't enumerable.
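A rough sketch of that registerThread() idea using C++11 thread_local and a mutex-protected set of thread ids (all names are illustrative; the original answer talks about raw TLS indices instead):
#include <mutex>
#include <set>
#include <thread>

class Tracked
{
public:
    void someMethod()
    {
        registerThread();                    // called at the top of every public method
        // ... actual work ...
    }
    std::set<std::thread::id> accessingThreads() const
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_threads;                    // copy of the ids seen so far
    }
private:
    void registerThread()
    {
        thread_local const Tracked* lastSeen = nullptr;   // per-thread cache to skip redundant inserts
        if (lastSeen != this)
        {
            lastSeen = this;
            std::lock_guard<std::mutex> lock(m_mutex);
            m_threads.insert(std::this_thread::get_id());
        }
    }
    mutable std::mutex m_mutex;
    std::set<std::thread::id> m_threads;
};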