Is there an Interlocked for this? (C++)

Please note: these builds target VS2008/VS2010, so I cannot use any C++11 constructs.
Imagine I have subscribers listening to some publisher. My publisher has a container of subscriber pointers. In my void detach(ISubscriber *), instead of locking the subscriber list, I will "NULL" out the pointer, for lack of a better word, for that subscriber.
//My container in the publisher. Inserts do not invalidate iterators; removals only
//invalidate iterators pointing to the removed element, which is why we NULL instead of erasing.
Container<ISubscriber *> myContainer;
Now in the publisher...
void NotifySubscribers(){
    for(Container<ISubscriber *>::iterator it = myContainer.begin(); it != myContainer.end(); ++it){
        if(*it)            //This is my problem
            (*it)->notify();
    }
}
Line 3 - the pointer is tested and points to a valid object.
Before line 4 executes, another thread NULLs out that subscriber.
Line 4 - Boom.
My question: is there some sort of Interlocked operation I can use so that the test and the call are atomic?
For example, for a reference-counted object, something like this works in the destructor:
RefCountObject::~RefCountObject(){
    if(InterlockedDecrement(&m_count) == 0)
        delete m_data;
}
Here, the reference counter is decremented and tested against zero atomically; then, and only then, if it is zero, the data is released.
Is there a way for me to do this for calling a function based on the validity of a pointer?
Edit 1: I need to clarify a little based on the comments - and thank you for your replies. The publisher is not responsible for releasing the Subscribers' memory, so there will be no leak. After the notify, the publisher goes through a loop that cleans up the container by removing the nulled-out subscribers.
Now as for the subscribers themselves: when they detach, they are just detaching from listening to the publisher. They will live on as static objects (this is the contract we are requiring). Why? Because we cannot afford to hold a lock during notification. The only other option was to use shared_ptr, which we decided not to incorporate into this DLL because of future versioning concerns.
I created a hand-written shared_ptr, but then it occurred to me that any reference to an object not wrapped in a resource-management class would fall into the same pitfall; it would just push the "requirement" onto subscribers to make sure they never refer to dangling references within their own implementations.
Which brings us back to simply saying that subscribers cannot be "released"; currently all the clients that will use this are static objects. We were just looking toward the future. Some of the users are legacy apps, and it would not be easy to bring in enable_shared_from_this etc.

is there some sort of Interlocked operation I can use so that the test and the call are atomic?
For the test, yes, there is a way - you are just comparing a pointer.
For the call as well, I doubt it. You will need a guard around the whole test-and-call, i.e. a critical section.
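A minimal sketch of what that guard might look like with a Win32 CRITICAL_SECTION (illustrative only; g_cs and the lookup inside detach are assumptions, not the asker's actual code):
#include <windows.h>

CRITICAL_SECTION g_cs; // InitializeCriticalSection(&g_cs) once at startup

void NotifySubscribers(){
    for(Container<ISubscriber *>::iterator it = myContainer.begin(); it != myContainer.end(); ++it){
        EnterCriticalSection(&g_cs);
        ISubscriber *s = *it;   // the test...
        if(s)
            s->notify();        // ...and the call happen under the same lock
        LeaveCriticalSection(&g_cs);
    }
}

void detach(ISubscriber *sub){
    EnterCriticalSection(&g_cs);
    // find the matching slot and NULL it while holding the same lock,
    // so it cannot change between the test and the call above (lookup omitted)
    LeaveCriticalSection(&g_cs);
}
The drawback, as the question's edit points out, is that the lock is now held across every notify() call.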

You can use a "smart pointer" strategy to do a deferred nulling of the pointer. As long as someone has a reference to the pointer, as determined by an interlocked reference count, keep the pointer valid; when the count goes to zero it's safe to null.
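One way the deferred-nulling idea might be sketched, relying (as the question's edit states) on the fact that detached subscribers stay alive, so the only thing being deferred is the clearing of the slot (the names below are illustrative, not a real API):
#include <windows.h>
#include <vector>

volatile LONG g_readers = 0;                 // notification passes currently in flight
std::vector<ISubscriber **> g_pendingNull;   // slots whose detach has been requested
CRITICAL_SECTION g_pendingLock;              // protects g_pendingNull only, never held while notifying

void NotifySubscribers(){
    InterlockedIncrement(&g_readers);
    // ... iterate the container and call notify() on every non-NULL slot ...
    if(InterlockedDecrement(&g_readers) == 0){
        // no notification pass is in flight, so it is safe to clear the slots now
        EnterCriticalSection(&g_pendingLock);
        for(size_t i = 0; i < g_pendingNull.size(); ++i)
            *g_pendingNull[i] = NULL;
        g_pendingNull.clear();
        LeaveCriticalSection(&g_pendingLock);
    }
}

void detach(ISubscriber **slot){
    // defer the NULLing instead of doing it immediately; because the subscriber
    // object itself lives on, a notify() that races with this is merely late,
    // never a use-after-free
    EnterCriticalSection(&g_pendingLock);
    g_pendingNull.push_back(slot);
    LeaveCriticalSection(&g_pendingLock);
}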


Why do we need both std::promise and std::future?

I am wondering why we need both std::promise and std::future. Why did the C++11 standard divide get and set_value into two separate classes, std::future and std::promise?
In the answer to this post, it is mentioned that:
The reason it is separated into these two separate "interfaces" is to
hide the "write/set" functionality from the "consumer/reader".
I don't understand the benefit of the hiding here. Wouldn't it be simpler to have only one class, "future"? For example, promise.set_value could be replaced by future.set_value.
The problem that promise/future exist to solve is to shepherd a value from one thread to another. It may also transfer an exception instead.
So the source thread must have some object that it can talk to, in order to send the desired value to the other thread. Alright... who owns that object? If the source has a pointer to something that the destination thread owns, how does the source know if the destination thread has deleted the object? Maybe the destination thread no longer cares about the value; maybe something changed such that it decided to just drop your thread on the floor and forget about it.
That's entirely legitimate code in some cases.
So now the question becomes: why can't the source own the promise and simply give the destination a pointer or reference to it? Because the promise would then live and die with the source thread: once the source thread terminates, the promise is destroyed, leaving the destination thread with a reference to a destroyed promise.
Oops.
Therefore, the only viable solution is to have two full-fledged objects: one for the source and one for the destination. These objects share ownership of the value that gets transferred. Of course, that doesn't mean that they couldn't be the same type; you could have something like shared_ptr<promise> or somesuch. After all, promise/future must have some shared storage of some sort internally, correct?
However, consider the interface of promise/future as they currently stand.
promise is non-copyable. You can move it, but you can't copy it. future is also non-copyable, but a future can become a shared_future that is copyable. So you can have multiple destinations, but only one source.
promise can only set the value; it can't even get it back. future can only get the value; it cannot set it. Therefore, you have an asymmetric interface, which is entirely appropriate to this use case. You don't want the destination to be able to set the value and the source to be able to retrieve it. That's backwards code logic.
So that's why you want two objects. You have an asymmetric interface, and that's best handled with two related but separate types and objects.
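A tiny example of that asymmetry (standard C++11, nothing project-specific): the producer side can only set, the consumer side can only get.
#include <future>
#include <thread>
#include <iostream>

int main(){
    std::promise<int> p;                   // write end, kept by the producer
    std::future<int> f = p.get_future();   // read end, handed to the consumer

    std::thread producer([&p]{
        p.set_value(42);                   // the only thing the producer can do with it
    });

    std::cout << f.get() << '\n';          // the only thing the consumer can do (blocks until set)
    producer.join();
}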
I would think of a promise/future as an asynchronous queue (that's only intended to hold a single value).
The future is the read end of the queue. The promise is the write end of the queue.
The use of the two is normally distinct: the producer normally just writes to the "queue", and the consumer just reads from it. Although, as you've noted, it's possible for a producer to read the value, there's rarely much reason for it to do that, so optimizing that particular operation is rarely seen as much of a priority.
In the usual scheme of things, the producer produces the value, and puts it into the promise. The consumer gets the value from the future. Each "client" uses one simple interface dedicated exclusively to one simple task, so it's easier to design and document the code, as well as ensuring that (for example) the consumer code doesn't mess with something related to producing the value (or vice versa). Yes, it's possible to do that, but enough extra work that it's fairly unlikely to happen by accident.

Pointer to STL Container Thread Safety (Queue/Deque)

I currently have a bit of a multi-threading conundrum. I have two threads: one that reads serial data, and another that attempts to extract packets from the data. The two threads share a queue. The thread that attempts to create packets has a function entitled Parse with the following declaration:
void Parse(std::queue<uint8_t>* data, pthread_mutex_t* lock);
Essentially it takes a pointer to the STL queue and uses pop() as it goes through the queue looking for a packet. Every pop() is guarded by the lock, which is shared between the Parse function and the thread that is pushing data onto the queue. This way, the queue can be parsed while data is being actively added to it.
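Simplified, the consuming side is shaped roughly like this (an illustrative sketch, not the actual code; the real Parse does more validation):
void Parse(std::queue<uint8_t>* data, pthread_mutex_t* lock){
    while(true){
        pthread_mutex_lock(lock);
        if(data->empty()){
            pthread_mutex_unlock(lock);
            break;                      // nothing more to look at for now
        }
        uint8_t byte = data->front();   // every access to the shared queue is under the lock
        data->pop();
        pthread_mutex_unlock(lock);

        // ... feed 'byte' into the packet state machine ...
    }
}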
The code seems to work for the most part, but I'm seeing invalid packets at a somewhat higher rate than I'd expect. My main question is whether the pointer could be changing while I'm reading data out of the queue. For example, if the first thread pushes a bunch of data, is there a chance that the queue's location in memory can change? Or am I guaranteed that the pointer to the queue will remain constant, even as data is added? My concern is that the memory for the queue could be reallocated during my Parse() function, so that in the middle of the function the pointer is invalidated.
For example, I understand that certain STL iterators are invalidated for certain operations. However, I am passing a pointer to the container itself. That is, something like this:
// somewhere in my code I create a queue
std::queue<uint8_t> queue;
// elsewhere...
Parse(&queue, &lock_shared_between_the_two_threads);
Does the pointer to the container itself ever get invalidated? And what does it point to? The first element, or ...?
Note that I'm not pointing to any given element, but to the container itself. Also, I never specified which underlying container should be used to implement the queue, so underneath it all, it's just a deque.
Any help will be greatly appreciated.
EDIT 8/1:
I was able to run a few tests on my code. A couple of points:
The pointer for the container itself does not change over the lifecycle of my program. This makes sense since the queue itself is a member variable of a class. That is, while the queue's elements are dynamically allocated, it does not appear to be the case for the queue itself.
The bad packets I was experiencing appear to be a function of the serial data I'm receiving. I dumped all the data to a hex file and was able to find packets that were invalid, and my algorithm was correctly marking them as such.
As a result, I'm thinking that passing a reference or pointer to an STL container into a function is thread safe, but I'd like to hear some more commentary ensuring that this is the case, or whether this is implementation specific (as a lot of the STL is...).
You are worried that modifying a container (adding/deleting nodes) in one thread will somehow invalidate the pointer to the container in another thread. The container is an abstraction and will remain valid unless you destroy the container object itself. The memory for the data maintained by the container is typically allocated on the heap by its allocator (std::allocator by default).
This is quite different from the memory allocated for the container object itself, which can be on the stack, the heap, etc., depending on how the container object was created. This separation of the container object from its element storage is what prevents modifications to the data from moving or modifying the container object itself.
To make debugging your problem simpler, as Jonathan Reinhart suggests, make it a single-threaded system that reads the stream AND parses it.
On a side note, have you considered using Boost.Lockfree queues or something similar? They are designed for exactly this type of scenario. If you are receiving and reading packets frequently, locking the queue for every read and write can become a significant performance overhead.
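For the single-producer/single-consumer case described here, a sketch with Boost.Lockfree (boost::lockfree::spsc_queue, available since Boost 1.53; the capacity and names below are just illustrative) might look like:
#include <boost/lockfree/spsc_queue.hpp>
#include <stdint.h>

boost::lockfree::spsc_queue<uint8_t, boost::lockfree::capacity<4096> > serial_bytes;

// serial-reader thread: push() returns false when the ring buffer is full
void on_byte_received(uint8_t b){
    while(!serial_bytes.push(b))
        ; // full: spin, yield, or drop depending on requirements
}

// parser thread: pop() returns false when the ring buffer is empty - no mutex anywhere
void parse_available(){
    uint8_t b;
    while(serial_bytes.pop(b)){
        // ... feed 'b' into the packet state machine ...
    }
}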

Data sharing via stack between publisher and consumer thread

I have a publisher thread and a consumer thread. They share data via a std::stack<Data *>. The publisher simply push()es a pointer and the consumer simply pop()s a pointer, uses it, and calls delete on it. Since there is only a single thread publishing pointers one at a time and one thread consuming them, is there any need to synchronize the stack? Keep in mind that the stack is only storing pointers. The publisher pushes a pointer only when the Data object is fully constructed.
Failure to synchronize concurrent calls to non-const methods of containers in namespace std is undefined behavior (it is a data race).
Neither push nor pop is const on the underlying container of a stack. So two threads are both writing to the state of the underlying container of the stack.
A way to think about it is that both are, at the very least, going to have to fight over the count of the number of elements in the stack: one is trying to increase it, the other is trying to decrease it. (There are other problems, but that one should convince you that both are writing to the state of the stack)
The std::stack<Data*> instance will need to have access synchronized, as more than one thread can be modifying it (via pop() and push()), but the elements contained in it do not, since only a single thread operates on any one element at a time.
Yes, there is a need to synchronize access to the stack, because the std::stack class does not guarantee that any operation is atomic, and it is possible that push(), top() and pop() will interleave.
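A minimal sketch of what that synchronization might look like (assuming C++11's std::mutex is available; Data is the asker's type):
#include <stack>
#include <mutex>

struct Data;   // the asker's type

class SharedStack {
public:
    void push(Data* d){
        std::lock_guard<std::mutex> g(m_);
        s_.push(d);
    }
    // returns nullptr if empty; the emptiness test and the pop happen under one lock,
    // so a concurrent push() cannot interleave between them
    Data* pop(){
        std::lock_guard<std::mutex> g(m_);
        if(s_.empty())
            return nullptr;
        Data* d = s_.top();
        s_.pop();
        return d;
    }
private:
    std::stack<Data*> s_;
    std::mutex m_;
};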

Need some ideas to destroy an N:M relation

I have a log system that collects messages in different queues. Also, the system accepts listeners (references to listeners) that the system calls to write messages (listeners = consumers). Note that the whole log system is a singleton.
My problem is in the destructor. You can send messages to different queues, and you can subscribe a listener to more than one queue, so every message queue has a list of listeners. When the destructor is called, if a listener was added to two or more queues, the destructor tries to delete the same listener two times (or more).
A dirty solution is simply not to delete the listeners (they are few and small, and it's a singleton, so the leak would be small, but I don't like it).
Another solution is to maintain another structure holding all the listeners, and delete the pointers from that structure instead of from the queues. But nothing guarantees that two different entries don't refer to the same listener, so the problem would be the same.
I think I need a different solution. Some ideas?
Thanks!!!!
Why don't you just use shared pointers? They come with the Boost library (and shared_ptr is also part of the C++11 standard as std::shared_ptr), and it looks like they are exactly what you need.
The shared_ptr class template stores a pointer to a dynamically allocated object, typically with a C++ new-expression. The object pointed to is guaranteed to be deleted when the last shared_ptr pointing to it is destroyed or reset.
See http://www.boost.org/doc/libs/1_48_0/libs/smart_ptr/shared_ptr.htm for more information.
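A rough sketch of how that removes the double-delete (boost::shared_ptr here; std::shared_ptr in C++11 works the same; the types and the map below are illustrative, not the asker's code):
#include <boost/shared_ptr.hpp>
#include <map>
#include <string>
#include <vector>

struct IListener {
    virtual ~IListener() {}
    virtual void write(const std::string& msg) = 0;
};

typedef boost::shared_ptr<IListener> ListenerPtr;

// queue name -> listeners subscribed to that queue; the same listener may appear
// in several vectors, and each copy just bumps the reference count
std::map<std::string, std::vector<ListenerPtr> > queues;

// In the log system's destructor nothing special is needed: clearing 'queues'
// drops every copy, and each listener is deleted exactly once, when its last
// shared_ptr goes away.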

Cleaning up threads referencing an object when deleting the object (in C++)

I have an object (Client * client) which starts multiple threads to handle various tasks (such as processing incoming data). The threads are started like this:
// Start the thread that will process incoming messages and stuff them into the appropriate queues.
mReceiveMessageThread = CreateThread(NULL, 0, (LPTHREAD_START_ROUTINE)receiveRtpMessageFunction, this, 0, 0);
These threads all have references back to the initial object, like so:
// Thread initialization function for receiving RTP messages from a newly connected client.
static int WINAPI receiveRtpMessageFunction(LPVOID lpClient)
{
    LOG_METHOD("receiveRtpMessageFunction");
    Client * client = (Client *)lpClient;
    while(client->isConnected())
    {
        if(client->receiveMessage() == ERROR)
        {
            Log::log("receiveRtpMessageFunction Failed to receive message");
        }
    }
    return SUCCESS;
}
Periodically, the Client object gets deleted (for various good and sufficient reasons). But when that happens, the processing threads that still have references to the (now deleted) object throw exceptions of one sort or another when trying to access member functions on that object.
So I'm sure that there's a standard way to handle this situation, but I haven't been able to figure out a clean approach. I don't want to just terminate the thread, as that doesn't allow for cleaning up resources. I can't set a property on the object, as it's precisely properties on the object that become inaccessible.
Thoughts on the best way to handle this?
I would solve this problem by introducing a reference count in your object. The worker thread would hold a reference, and so would the creator of the object. Instead of using delete, you decrement the reference count, and whoever drops the last reference is the one that actually calls delete.
You can use existing reference counting mechanisms (shared_ptr etc.), or you can roll your own with the Win32 APIs InterlockedIncrement() and InterlockedDecrement() or similar (maybe the reference count is a volatile DWORD starting out at 1...).
The only other thing that's missing is that when the main thread releases its reference, it should signal to the worker thread to drop its own reference. One way you can do this is by an event; you can rewrite the worker thread's loop as calls to WaitForMultipleObjects(), and when a certain event is signalled, you take that to mean that the worker thread should clean up and drop the reference.
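A rough sketch of that combination (Win32; the names are illustrative and error handling is omitted; the real loop would use WaitForMultipleObjects() on the stop event plus a data event, but a plain check of the stop event keeps the sketch short):
#include <windows.h>

class Client {
public:
    Client() : m_refs(1), m_stopEvent(CreateEvent(NULL, TRUE, FALSE, NULL)) {}

    void AddRef()  { InterlockedIncrement(&m_refs); }
    void Release() { if(InterlockedDecrement(&m_refs) == 0) delete this; }

    HANDLE StopEvent() const { return m_stopEvent; }
    void   RequestStop()     { SetEvent(m_stopEvent); }   // called instead of delete

private:
    ~Client() { CloseHandle(m_stopEvent); }   // only reachable via Release()
    volatile LONG m_refs;
    HANDLE m_stopEvent;
};

// Worker thread: the creator calls AddRef() before CreateThread().
static DWORD WINAPI receiveRtpMessageFunction(LPVOID lpClient){
    Client* client = static_cast<Client*>(lpClient);
    while(WaitForSingleObject(client->StopEvent(), 0) != WAIT_OBJECT_0){
        // ... receive and process one message ...
    }
    client->Release();   // whoever drops the last reference performs the delete
    return 0;
}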
You don't have much leeway because of the running threads.
No combination of shared_ptr + weak_ptr may save you... you may call a method on the object while it is valid and then order its destruction (using only shared_ptr would save you).
The only thing I can imagine is to first terminate the various threads and then destroy the object. This way you ensure that each thread terminates gracefully, cleaning up its own mess if necessary (and it might need the object to do that).
This means that you cannot delete the object out of hand, since you must first resynchronize with those who use it, and that you need some event handling for the synchronization part (since you basically want to tell the threads to stop, and not wait indefinitely for them).
I leave the synchronization part to you, there are many alternatives (events, flags, etc...) and we don't have enough data.
You can deal with the actual cleanup from either the destructor itself or by overloading the various delete operations, whichever suits you.
You'll need to have some other state object the threads can check to verify that the "client" is still valid.
One option is to encapsulate your client reference inside some other object that remains persistent, and provide a reference to that object from your threads.
You could use the observer pattern with proxy objects for the client in the threads. The proxies act like smart pointers, forwarding access to the real client. When you create them, they register themselves with the client, so that it can invalidate them from its destructor. Once they're invalidated, they stop forwarding and just return errors.
This could be handled by passing a (boost) weak pointer to the threads.
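A short sketch of that idea (C++11 std::weak_ptr shown; boost::weak_ptr is equivalent, and Client, receiveMessage, ERROR and Log are the asker's names):
#include <memory>

void receiveLoop(std::weak_ptr<Client> weakClient){
    for(;;){
        std::shared_ptr<Client> client = weakClient.lock();  // promote for one iteration
        if(!client)
            break;                       // the owner dropped its shared_ptr: time to exit
        if(client->receiveMessage() == ERROR)
            Log::log("receiveRtpMessageFunction Failed to receive message");
        // 'client' goes out of scope here, so the worker never extends the
        // object's lifetime beyond a single iteration
    }
}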