C++ objects and threads

I am writing a simple server application where you can have multiple connections, each connection handled by its own thread. This is an example of how I would like it to look (it doesn't work): there is a collection of threads, and each thread instantiates an object of the connection class:
class connection {};

class server {
    std::vector<std::thread> active_connections;
public:
    void listen() { active_connections.push_back(std::thread(connection)); }
};
I have been searching for a solution, but the best I could find was starting threads on member functions. That turned out quite wrong when I tested it, for example:
class connection {};

class server {
    std::vector<std::thread> active_connections;
public:
    void new_connection() { ... }
    void listen() {
        active_connections.push_back(std::thread(&server::new_connection, this));
    }
};
The message was: error: use of deleted function ‘std::thread::thread(const std::thread&)’.
Does that mean the std::thread class wants to copy the server class? I don't know C++ that well, so please don't flame; I'm only asking.
Thanks!
EDIT:
This is where this happens:
void server::do_listen()
{
    int addr_size = sizeof(sockaddr_in);
    sockaddr_in client_sock;
    connection_info cn_info;
    while (true)
    {
        int csock;
        if ((csock = accept(server_sock, (sockaddr*)&client_sock, (socklen_t*)&addr_size)) != -1)
        {
            printf("Incoming connection from %s.\n", inet_ntoa(client_sock.sin_addr));
            memset(&cn_info, 0, sizeof(connection_info));
            cn_info.sock_addr = client_sock;
            cn_info.sock = csock;
            std::thread thr(&server::new_connection, *this, cn_info);
            thr.join();
        }
    }
}
This is what I have so far. server::new_connection() is still empty.

The problem is here:
std::thread thr(&server::new_connection, *this, cn_info);
^
You're binding a copy of the server object, which isn't copyable because it contains (a container of) non-copyable thread objects. Instead, bind a pointer:
std::thread thr(&server::new_connection, this, cn_info);
Some might find a lambda more readable; this captures the this pointer and cn_info by value:
std::thread thr([=]{new_connection(cn_info);});
As a commenter mentions, you could obfuscate the solution by binding a reference wrapper:
std::thread thr(&server::new_connection, std::ref(*this), cn_info);
but I prefer to remove, rather than add, complexity where possible.

Does that mean the std::thread class wants to copy the server class?
No, it means that a copy of a std::thread is being made somewhere, which is forbidden because std::thread is non-copyable (note the thread(const thread&) = delete; in its list of constructors).
You should eliminate any code that performs a copy of a thread. The code you've posted doesn't perform such a copy.
An example where a copy is made "behind the scenes" would be a push_back of a named thread variable into a vector, e.g.:
std::thread myThread;
myVector.push_back(myThread);
In your code:
active_connections.push_back(std::thread(&server::new_connection,this));
since you're pushing back a temporary, it is moved into the vector rather than copied.
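To make the copy/move distinction concrete, here is a small self-contained illustration (the worker lambdas are placeholders):

#include <thread>
#include <vector>

int main() {
    std::vector<std::thread> threads;

    std::thread myThread([] { /* do work */ });
    // threads.push_back(myThread);          // error: copy constructor is deleted
    threads.push_back(std::move(myThread));  // OK: the thread is moved

    threads.push_back(std::thread([] { /* do work */ }));  // OK: temporary is moved

    for (auto& t : threads)
        t.join();
}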

More of a design question to you, the author:
You create one thread per connection, so ten connections means ten threads. But what do you do if you need to handle 10,000 connections? I'm not sure your operating system will like that.
A much better connection-handling approach, in my opinion, would be to have only N threads handling your network connections. If your computer has, say, six cores, you could dedicate one, two, or more threads solely to the network work. In other words, use a thread pool of a fixed size for this (a rough sketch follows below).
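As a rough illustration of that idea (not the proactor pattern itself, and not production-ready), here is a minimal fixed-size thread pool sketch; the connection-handling work and the handle_connection function in the usage comment are placeholders:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;

public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                        if (stop_ && tasks_.empty())
                            return;                  // drained, shut down
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();                          // e.g. handle one accepted connection
                }
            });
        }
    }

    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_)
            w.join();
    }
};

// Usage: a pool sized to the hardware instead of one thread per connection.
// ThreadPool pool(std::thread::hardware_concurrency());
// pool.submit([csock] { handle_connection(csock); });  // handle_connection is hypothetical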
The proactor pattern could be interesting for you:
http://en.wikipedia.org/wiki/Proactor_pattern
If you're looking for a library that could handle such things, you could look to the upcoming C++ Boost library named Boost.Asynchronous:
https://github.com/henry-ch/asynchronous
This library is still in development, but really good. I've made some simple client/server applications with the library.

How to use an executor from a boost::asio object to dispatch stuff into the same execution thread?

Ok, I don't have enough code yet for a fully working program, but I'm already running into issues with "executors".
EDIT: this is Boost 1.74; Debian doesn't give me anything more current. That causes problems elsewhere, but I hope it already had working executors back then as well :-)
Following one of the Beast examples, I'm assigning a "strand" to a number of objects (a resolver and a stream) to be slightly more future-proof in case I need to go a multithreaded route. But that's not actually the problem here, just the reason why I used a strand.
Now, I have this object that has a number of asio "subobjects" which are all initialized with that executor. No issues there, at least on the compiler side (I don't know whether the code does the intended stuff yet... it's heavily based on a beast example, though, so it's not completely random).
So, I want to send data to that object now. The assumption here is that the entire executor stuff is kind of pointless if I just randomly manipulate stuff from the "outside", so I wanted to "dispatch" my changes to the executor so that it plays nicely with the async stuff that might be going on, especially when/if threads come into play. And since all the asio objects know their executor, I figured I wouldn't need to remember it myself.
Here is a small self-contained example that shows the problem I'm having.
#include <boost/asio.hpp>

/************************************************************************/

class Test
{
private:
    boost::asio::ip::tcp::resolver resolver;

public:
    Test(boost::asio::any_io_executor executor)
        : resolver(executor)
    {
    }

public:
    void doSomething()
    {
        std::function<void(void)> function;

        // Doesn't compile: no "dispatch" member
        resolver.get_executor().dispatch(function);

        // Doesn't compile: "target" is a template, needs a type
        resolver.get_executor().target()->dispatch(function);

        // Compiles, but I don't like having to know that it's a strand?
        // How can the asio objects use the executor without me telling them the type?
        // NOTE: I don't know whether it does the right thing!
        resolver.get_executor().target<boost::asio::io_context::strand>()->dispatch(function);
    }
};

/************************************************************************/

void test()
{
    boost::asio::io_context ioContext;
    Test test(boost::asio::make_strand(ioContext));
}
It's actually all in the "doSomething()" function: how do I "dispatch" something to the same executor that some asio object uses, without having to know exactly what that executor is?
Yes, I can do the workaround and pass the "strand" object instead of any_executor, and store that with the other stuff so I have something to call directly. But since every asio object has an executor and also manages to use it properly... I should be able to do the same thing, no?
Post, defer and dispatch are free functions:
boost::asio::dispatch(resolver.get_executor(), function);
Live: http://coliru.stacked-crooked.com/a/c39d263a99fbe3fd
#include <boost/asio.hpp>
#include <iostream>

struct Test {
    Test(boost::asio::any_io_executor executor) : resolver(executor) {}

    void doSomething() {
        boost::asio::dispatch(resolver.get_executor(),
                              [] { std::cout << "Hello world\n"; });
    }

private:
    boost::asio::ip::tcp::resolver resolver;
};

int main() {
    boost::asio::io_context ioContext;
    Test test(make_strand(ioContext));
    test.doSomething();
    ioContext.run();
}
Prints
Hello world

std::async analogue for specified thread

I need to work with several objects, where each operation may take a lot of time.
The processing cannot be done in the GUI (main) thread, where I start it.
I need to make all communication with these objects asynchronous, something like std::async with std::future, or QtConcurrent::run() with QFuture in my main framework (Qt 5), but neither of those lets me select the thread. I always need to work with a given object (objects == devices) in one and the same additional thread,
because:
I need to make a universal solution and don't want to make each class thread-safe
For example, even if I made a thread-safe wrapper for QSerialPort, a serial port in Qt cannot be accessed from more than one thread:
Note: The serial port is always opened with exclusive access (that is, no other process or thread can access an already opened serial port).
Usually communication with a device consists of transmitting a command and receiving an answer. I want to process each answer exactly where the request was sent, and I don't want to use event-driven-only logic.
So, my question.
How can the function be implemented?
MyFuture<T> fut = myAsyncStart(func, &specificLiveThread);
It is necessary that one live thread can be passed many times.
Let me answer without referencing the Qt library, since I don't know its threading API.
In the C++11 standard library there is no straightforward way to reuse a created thread. A thread executes a single function and can only be joined or detached. However, you can implement what you need with the producer-consumer pattern: a consumer thread executes tasks (represented, for instance, as std::function objects) that are placed in a queue by producer threads. So if I understand correctly, you need a single-threaded thread pool.
I can recommend my C++14 implementation of thread pools as task queues. It isn't commonly used (yet!), but it is covered with unit tests and has been checked with thread sanitizer multiple times. The documentation is sparse, but feel free to ask anything in the GitHub issues!
Library repository: https://github.com/Ravirael/concurrentpp
And your use case:
#include <task_queues.hpp>
#include <future>

int main() {
    // The single-threaded task queue object - creates one additional thread.
    concurrent::n_threaded_fifo_task_queue queue(1);

    // Add a task to the queue; the task is executed in the created thread.
    std::future<int> future_result = queue.push_with_result([] { return 4; });

    // Blocks until the task is completed.
    int result = future_result.get();

    // Executes the task on the same thread as before.
    std::future<int> second_future_result = queue.push_with_result([] { return 4; });
}
If you want to follow the Active Object approach, here is an example using templates:
The WorkPackage and its interface are just for storing functions of different return types in a vector (used later in the ActiveObject::async member function):
class IWorkPackage {
public:
    virtual void execute() = 0;
    virtual ~IWorkPackage() {
    }
};

template <typename R>
class WorkPackage : public IWorkPackage {
private:
    std::packaged_task<R()> task;

public:
    WorkPackage(std::packaged_task<R()> t) : task(std::move(t)) {
    }

    void execute() final {
        task();
    }

    std::future<R> get_future() {
        return task.get_future();
    }
};
Here's the ActiveObject class, which takes your device type as a template parameter. It has a vector to store the method requests for the device and a thread to execute those methods one after another. Finally, the async function is used to request a method call on the device:
template <typename Device>
class ActiveObject {
private:
    Device servant;
    std::thread worker;
    std::vector<std::unique_ptr<IWorkPackage>> work_queue;
    std::atomic<bool> done;
    std::mutex queue_mutex;
    std::condition_variable cv;

    void worker_thread() {
        while (done.load() == false) {
            std::unique_ptr<IWorkPackage> wp;
            {
                std::unique_lock<std::mutex> lck {queue_mutex};
                cv.wait(lck, [this] { return !work_queue.empty() || done.load() == true; });
                if (done.load() == true) continue;
                wp = std::move(work_queue.back());
                work_queue.pop_back();
            }
            if (wp) wp->execute();
        }
    }

public:
    ActiveObject() : done(false) {
        worker = std::thread {&ActiveObject::worker_thread, this};
    }

    ~ActiveObject() {
        {
            std::unique_lock<std::mutex> lck{queue_mutex};
            done.store(true);
        }
        cv.notify_one();
        worker.join();
    }

    template<typename R, typename ...Args, typename ...Params>
    std::future<R> async(R (Device::*function)(Params...), Args... args) {
        std::unique_ptr<WorkPackage<R>> wp {new WorkPackage<R> {std::packaged_task<R()> { std::bind(function, &servant, args...) }}};
        std::future<R> fut = wp->get_future();
        {
            std::unique_lock<std::mutex> lck{queue_mutex};
            work_queue.push_back(std::move(wp));
        }
        cv.notify_one();
        return fut;
    }

    // In case you want to call some functions directly on the device
    Device* operator->() {
        return &servant;
    }
};
You can use it as follows:
ActiveObject<QSerialPort> ao_serial_port;
// direct call:
ao_serial_port->setReadBufferSize(size);
//async call:
std::future<void> buf_future = ao_serial_port.async(&QSerialPort::setReadBufferSize, size);
std::future<Parity> parity_future = ao_serial_port.async(&QSerialPort::parity);
// Maybe do some other work here
buf_future.get(); // wait until calculations are ready
Parity p = parity_future.get(); // blocks if result not ready yet, i.e. if method has not finished execution yet
EDIT to answer the question in the comments: The AO is mainly a concurrency pattern for multiple readers/writers. As always, its use depends on the situation. This pattern is commonly used in distributed systems/network applications, for example when multiple clients request a service from a server. The clients benefit from the AO pattern as they are not blocked while waiting for the server to answer.
One reason why this pattern is not used so often in fields other than network applications might be the thread overhead: creating a thread for every active object results in a lot of threads, and thus thread contention, if the number of CPUs is low and many active objects are used at once.
I can only guess why people think it is a strange issue: as you already found out, it does require some additional programming. Maybe that's the reason, but I'm not sure.
But I think the pattern is also very useful for other reasons and uses, as in your example, where the main thread (and also other background threads) require a service from singletons, for example devices or hardware interfaces that exist only in small numbers, are slow in their computations, and require concurrent access, without being blocked waiting for a result.
It's Qt. Its signal-slot mechanism is thread-aware. On your secondary (non-GUI) thread, create a QObject-derived class with an execute slot. Signals connected to this slot will marshal the event to that thread.
Note that this QObject can't be a child of a GUI object, since children need to live in their parent's thread, and this object explicitly does not live in the GUI thread.
You can handle the result using existing std::promise logic, just like std::future does.
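A minimal sketch of what this answer describes, assuming Qt 5; the Worker class, slot name, and 100 ms shutdown timer are illustrative, and Q_OBJECT requires moc (in a single-file build you would also include the generated moc output):

#include <QCoreApplication>
#include <QObject>
#include <QThread>
#include <QTimer>
#include <cstdio>

class Worker : public QObject {
    Q_OBJECT                       // requires moc; normally lives in a header
public slots:
    void execute() {
        // Runs in workerThread because the invocation below is queued.
        std::printf("executing in the worker thread\n");
    }
};

int main(int argc, char** argv) {
    QCoreApplication app(argc, argv);

    QThread workerThread;
    Worker worker;                 // no parent, so it can be moved to another thread
    worker.moveToThread(&workerThread);
    workerThread.start();          // runs an event loop in the new thread

    // A queued invocation marshals the call onto workerThread's event loop;
    // a signal connected to Worker::execute behaves the same way.
    QMetaObject::invokeMethod(&worker, "execute", Qt::QueuedConnection);

    QTimer::singleShot(100, [&] { app.quit(); });   // end the demo after 100 ms
    int rc = app.exec();

    workerThread.quit();
    workerThread.wait();
    return rc;
}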

simple thread safe vector for connections in grpc service

I'm trying to learn about concurrency, and I'm implementing a small connection pool in a grpc service that needs to make many connections to a postgres database.
I'm trying to implement a basic connection pool to prevent creating a new connection for each request. To start, I attempted to create a thread-safe std::vector. When I run the gRPC server, a single transaction is made and then the server blocks, but I can't work out what's going on. Any help would be appreciated.
class SafeVector {
    std::vector<pqxx::connection*> pool_;
    int size_;
    int max_size_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    SafeVector(int size, int max_size) : size_(size), max_size_(max_size) {
        assert(size_ <= max_size_);
        for (size_t i = 0; i < size; ++i) {
            pool_.push_back(new pqxx::connection("some conn string"));
        }
    }

    SafeVector(SafeVector const&) = delete;             // to be implemented
    SafeVector& operator=(SafeVector const&) = delete;  // no assignment keeps things simple

    std::shared_ptr<pqxx::connection> borrow() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this]{ return !pool_.empty(); });
        std::shared_ptr<pqxx::connection> res(pool_.back());
        pool_.pop_back();
        return res;
    }

    void surrender(std::shared_ptr<pqxx::connection> connection) {
        std::lock_guard<std::mutex> l(m_);
        pool_.push_back(connection.get());
        cv_.notify_all();
    }
};
In main, I then pass a SafeVector* s = new SafeVector(4, 10); into my service ServiceA(s)
Inside ServiceA, I use the connection as follows:
std::shared_ptr<pqxx::connection> c = safeVector_->borrow();
c->perform(SomeTransactorImpl);
safeVector_->surrender(c);
I put a bunch of logging statements everywhere, and I'm pretty sure I have a fundamental misunderstanding of the core concept of either (1) shared_ptr or (2) the various locking structures.
In particular, it seems that after 4 connections are used (the maximum number of hardware threads on my machine), a seg fault (error 11) happens when attempting to return a connection in the borrow() method.
Any help would be appreciated. Thanks.
Smart pointers in C++ are about object ownership.
Object ownership is about who gets to delete the object and when.
A shared pointer means that who gets to delete and when is a shared concern. Once you have said "no one bit of code is permitted to delete this object", you cannot take it back.
In your code, you try to take an object with shared ownership and claim it for your SafeVector in surrender. This is not permitted. You try it anyhow with a call to .get(), but the right to delete that object remains owned by shared pointers.
They proceed to delete it (maybe right away, maybe tomorrow) and your container has a dangling pointer to a deleted object.
Change your shared ptrs to unique ptrs. Add move as required to make it compile.
In surrender, assert the supplied unique ptr is non-empty.
And while you are in there, prefer
cv_.notify_one();
I would also
std::vector<std::unique_ptr<pqxx::connection>> pool_;
and change:
pool_.push_back(std::move(connection));
If you don't update the type of pool_, instead change .get() to .release(). Unlike shared_ptr, unique_ptr can give up ownership.
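A sketch of the borrow/surrender pair rewritten along those lines with unique_ptr; the connection string is a placeholder and the surrounding class is trimmed to the two methods and the members they need:

#include <cassert>
#include <condition_variable>
#include <memory>
#include <mutex>
#include <vector>
#include <pqxx/pqxx>

class SafeVector {
    std::vector<std::unique_ptr<pqxx::connection>> pool_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    explicit SafeVector(int size) {
        for (int i = 0; i < size; ++i)
            pool_.push_back(std::make_unique<pqxx::connection>("some conn string"));
    }

    std::unique_ptr<pqxx::connection> borrow() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return !pool_.empty(); });
        auto res = std::move(pool_.back());          // ownership leaves the pool
        pool_.pop_back();
        return res;
    }

    void surrender(std::unique_ptr<pqxx::connection> connection) {
        assert(connection);                          // never hand back an empty pointer
        {
            std::lock_guard<std::mutex> l(m_);
            pool_.push_back(std::move(connection));  // ownership returns to the pool
        }
        cv_.notify_one();
    }
};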

How to use a single object in multiple threads?

I want to use a single object in multiple threads using C++. I know from Java that threads share all variables, but it seems that in C++ it is different.
I have the following structure to store the data:
Class Flow: has multiple integers
Class UE: has a list<Flow*>
Class FlowTable: has a map<int,UE*>
Now I have two threads (objects: InOutReader and OutInReader); each of them has a FlowTable* and shall read and/or insert data into the FlowTable.
In the main() of my starter I call new FlowTable(), create the threaded objects, and give the FlowTable* to them using a setter. But in the end it looks like the two threads work with different FlowTable objects.
class InOutReader {
public:
    void start() {
        while (true) {
            // read data from somewhere (tap-interface1)
            // extract address from ip packet and tcp/udp header etc
            Flow* myflow = new Flow(IPsrc, IPdest);
            this->myflowTable->insertFlow(myflow);
        }
    }
};

class OutInReader {
public:
    void start() {
        while (true) {
            // read data from somewhere (tap-interface1)
            // extract address from ip packet and tcp/udp header etc
            Flow* myflow = new Flow(IPsrc, IPdest);
            this->myflowTable->existsFlow(myflow); // should return true if a flow with the same data was inserted before
        }
    }
};
Main program:
FlowTable* myFlowTable;

void startThreadOne() {
    InOutReader ior = InOutReader();
    ior.setFlowTable(myFlowTable);
    ior.start();
}

void startThreadtwo() {
    OutInReader oir = OutInReader();
    oir.setFlowTable(myFlowTable);
    oir.start();
}

int main() {
    myFlowTable = new FlowTable();
    std::thread t1 = std::thread(startThreadOne);
    std::thread t2 = std::thread(startThreadtwo);
    t1.join();
    t2.join();
}
What do I have to do to use the same FlowTable object in multiple threads?
I can't make heads or tails of your explication, but if you want to have the two threads sharing the same dynamically allocated FlowTable, the solution in C++ is incredibly simple:
int
main()
{
    FlowTable* sharedFlowTable = new FlowTable();
    std::thread t1( startThread1, sharedFlowTable );
    std::thread t2( startThread2, sharedFlowTable );
    // ...
}
Then declare startThread1 and startThread2 to take a FlowTable* as argument. (This is a lot simpler than in Java, where you'd have to define one class per thread, deriving from Runnable, and give each class a constructor which takes a FlowTable and copies it to a member variable so that the run function can find it.)
EDIT:
Of course, if the value pointed to by sharedFlowTable really is a FlowTable, and no inheritance or factory functions are involved, you could just make it a local variable in main, rather than a pointer, and pass &sharedFlowTable to the threads. This would be even simpler, and more idiomatic in C++. (And I have to thank @5gon12eder for pointing this out. Embarrassingly, because I'm usually the one arguing against dynamic allocation unless it is necessary.)
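A minimal sketch of that local-variable variant; FlowTable and the two thread functions are stand-ins for the poster's types:

#include <thread>

struct FlowTable { /* map<int, UE*> etc. */ };

void startThread1(FlowTable* table) { /* insert flows via table */ }
void startThread2(FlowTable* table) { /* look up flows via table */ }

int main() {
    FlowTable sharedFlowTable;                        // no new/delete needed
    std::thread t1(startThread1, &sharedFlowTable);   // both threads see the same object
    std::thread t2(startThread2, &sharedFlowTable);
    t1.join();
    t2.join();
}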

Detecting when an object is passed to a new thread in C++?

I have an object for which I'd like to track the number of threads that reference it. In general, when any method on the object is called I can check a thread local boolean value to determine whether the count has been updated for the current thread. But this doesn't help me if the user say, uses boost::bind to bind my object to a boost::function and uses that to start a boost::thread. The new thread will have a reference to my object, and may hold on to it for an indefinite period of time before calling any of its methods, thus leading to a stale count. I could write my own wrapper around boost::thread to handle this, but that doesn't help if the user boost::bind's an object that contains my object (I can't specialize based on the presence of a member type -- at least I don't know of any way to do that) and uses that to start a boost::thread.
Is there any way to do this? The only means I can think of requires too much work from users -- I provide a wrapper around boost::thread that calls a special hook method on the object being passed in provided it exists, and users add the special hook method to any class that contains my object.
Edit: For the sake of this question we can assume I control the means to make new threads. So I can wrap boost::thread for example and expect that users will use my wrapped version, and not have to worry about users simultaneously using pthreads, etc.
Edit2: One can also assume that I have some means of thread local storage available, through __thread or boost::thread_specific_ptr. It's not in the current standard, but hopefully will be soon.
In general, this is hard. The question of "who has a reference to me?" is not generally solvable in C++. It may be worth looking at the bigger picture of the specific problem(s) you are trying to solve, and seeing if there is a better way.
There are a few things I can come up with that can get you partway there, but none of them are quite what you want.
You can establish the concept of "the owning thread" for an object, and REJECT operations from any other thread, a la Qt GUI elements. (Note that trying to do things thread-safely from threads other than the owner won't actually give you thread-safety, since if the owner isn't checked it can collide with other threads.) This at least gives your users fail-fast behavior.
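As a rough sketch of that fail-fast behavior (names are illustrative, and the check itself is not a substitute for real synchronization):

#include <stdexcept>
#include <thread>

class OwnedObject {
    std::thread::id owner_ = std::this_thread::get_id();  // thread that created us

    void checkOwner() const {
        if (std::this_thread::get_id() != owner_)
            throw std::logic_error("object used from a non-owning thread");
    }
public:
    void doSomething() {
        checkOwner();   // reject, a la Qt GUI elements
        // ... actual work ...
    }
};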
You can encourage reference counting by having the user-visible objects being lightweight references to the implementation object itself [and by documenting this!]. But determined users can work around this.
And you can combine these two, i.e. you can have the notion of thread ownership for each reference, and then have the object become aware of who owns the references. This could be very powerful, but not really idiot-proof.
You can start restricting what users can and cannot do with the object, but I don't think covering more than the obvious sources of unintentional error is worthwhile. Should you be declaring operator& private, so people can't take pointers to your objects? Should you be preventing people from dynamically allocating your object? It depends on your users to some degree, but keep in mind you can't prevent references to objects, so eventually playing whack-a-mole will drive you insane.
So, back to my original suggestion: re-analyze the big picture if possible.
Short of a pimpl-style implementation that does a thread-id check before every dereference, I don't see how you could do this:
class MyClass;

class MyClassImpl {
    friend class MyClass;
    threadid_t owning_thread;
public:
    void doSomethingThreadSafe();
    void doSomethingNoSafetyCheck();
};

class MyClass {
    MyClassImpl* impl;
public:
    void doSomething() {
        if (__threadid() != impl->owning_thread) {
            impl->doSomethingThreadSafe();
        } else {
            impl->doSomethingNoSafetyCheck();
        }
    }
};
Note: I know the OP wants to list threads with active pointers; I don't think that's feasible. The above implementation at least lets the object know when there might be contention. When to change the owning_thread depends heavily on what doSomething does.
Usually you cannot do this programmatically.
Unfortunately, the way to go is to design your program in such a way that you can prove (i.e. convince yourself) that certain objects are shared, and others are thread-private.
The current C++ standard does not even have the notion of a thread, so there is no standard portable notion of thread local storage, in particular.
If I understood your problem correctly, I believe this could be done on Windows using the Win32 function GetCurrentThreadId().
Below is a quick and dirty example of how it could be used. Thread synchronisation should rather be done with a lock object.
If you create a CMyThreadTracker object at the top of every member function of the object whose threads you want to track, _handle_vector should contain the thread ids that are using your object.
#include <process.h>
#include <windows.h>
#include <tchar.h>
#include <stdio.h>
#include <vector>
#include <algorithm>
#include <functional>

using namespace std;

class CMyThreadTracker
{
    vector<DWORD> & _handle_vector;
    DWORD _h;
    CRITICAL_SECTION &_CriticalSection;
public:
    CMyThreadTracker(vector<DWORD> & handle_vector, CRITICAL_SECTION &crit) : _handle_vector(handle_vector), _CriticalSection(crit)
    {
        EnterCriticalSection(&_CriticalSection);
        _h = GetCurrentThreadId();
        _handle_vector.push_back(_h);
        printf("thread id %08x\n", _h);
        LeaveCriticalSection(&_CriticalSection);
    }
    ~CMyThreadTracker()
    {
        EnterCriticalSection(&_CriticalSection);
        vector<DWORD>::iterator ee = remove_if(_handle_vector.begin(), _handle_vector.end(), bind2nd(equal_to<DWORD>(), _h));
        _handle_vector.erase(ee, _handle_vector.end());
        LeaveCriticalSection(&_CriticalSection);
    }
};

class CMyObject
{
    vector<DWORD> _handle_vector;
public:
    void method1(CRITICAL_SECTION & CriticalSection)
    {
        CMyThreadTracker tt(_handle_vector, CriticalSection);
        printf("method 1\n");
        EnterCriticalSection(&CriticalSection);
        for (int i = 0; i < _handle_vector.size(); ++i)
        {
            printf(" this object is currently used by thread %08x\n", _handle_vector[i]);
        }
        LeaveCriticalSection(&CriticalSection);
    }
};

CMyObject mo;
CRITICAL_SECTION CriticalSection;

unsigned __stdcall ThreadFunc( void* arg )
{
    unsigned int sleep_time = *(unsigned int*)arg;
    while (true)
    {
        Sleep(sleep_time);
        mo.method1(CriticalSection);
    }
    _endthreadex( 0 );
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    HANDLE hThread;
    unsigned int threadID;

    if (!InitializeCriticalSectionAndSpinCount(&CriticalSection, 0x80000400))
        return -1;

    for (int i = 0; i < 5; ++i)
    {
        unsigned int sleep_time = 1000 * (i + 1);
        hThread = (HANDLE)_beginthreadex( NULL, 0, &ThreadFunc, &sleep_time, 0, &threadID );
        printf("creating thread %08x\n", threadID);
    }

    WaitForSingleObject( hThread, INFINITE );
    return 0;
}
EDIT1:
As mentioned in the comment, reference dispensing could be implemented as below. A vector could hold the unique thread ids referring to your object. You may also need to implement a custom assignment operator to deal with the object references being copied by a different thread.
class MyClass
{
public:
    static MyClass & Create()
    {
        static MyClass * p = new MyClass();
        return *p;
    }
    static void Destroy(MyClass * p)
    {
        delete p;
    }
private:
    MyClass() {}
    ~MyClass() {};
};

class MyCreatorClass
{
    MyClass & _my_obj;
public:
    MyCreatorClass() : _my_obj(MyClass::Create())
    {
    }
    MyClass & GetObject()
    {
        //TODO:
        // use GetCurrentThreadId to get thread id
        // check if the id is already in the vector
        // add this to a vector
        return _my_obj;
    }
    ~MyCreatorClass()
    {
        MyClass::Destroy(&_my_obj);
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    MyCreatorClass mcc;
    MyClass &o1 = mcc.GetObject();
    MyClass &o2 = mcc.GetObject();
    return 0;
}
The solution I'm familiar with is to state "if you don't use the correct API to interact with this object, then all bets are off."
You may be able to turn your requirements around and make it possible for any thread that references the object to subscribe to signals from the object. This won't help with race conditions, but it allows threads to know when the object has unloaded itself (for instance).
To solve the problem "I have an object and want to know how many threads access it", given that you can also enumerate your threads, you can use thread-local storage.
Allocate a TLS index for your object. Make a private method called "registerThread" which simply sets the thread TLS to point to your object.
The key extension to the poster's original idea is to call this registerThread() during every method call. Then you don't need to detect when or by whom the thread was created; the TLS value is just set (often redundantly) during every actual access.
To see which threads have accessed the object, just examine their TLS values.
Upside: simple and pretty efficient.
Downside: solves the posted question but doesn't extend smoothly to multiple objects or dynamic threads that aren't enumerable.
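A sketch of that register-on-every-call idea using standard C++ thread_local storage rather than a raw TLS index; class and member names are illustrative, and unlike the original suggestion it keeps the accessor set on the object itself so it can be inspected from any thread:

#include <cstddef>
#include <mutex>
#include <set>
#include <thread>

class Tracked {
    std::mutex m_;
    std::set<std::thread::id> accessors_;   // threads that have ever called in

    void registerThread() {
        // One flag per thread; avoids re-locking on every call from the same thread.
        thread_local const Tracked* lastRegistered = nullptr;
        if (lastRegistered != this) {
            lastRegistered = this;
            std::lock_guard<std::mutex> lock(m_);
            accessors_.insert(std::this_thread::get_id());
        }
    }
public:
    void someMethod() {
        registerThread();   // called at the top of every public method
        // ... real work ...
    }
    std::size_t accessorCount() {
        std::lock_guard<std::mutex> lock(m_);
        return accessors_.size();
    }
};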