What is the real benefit of Immutable Objects - concurrency

I always hear people say that immutable objects are easier to manage when working with multiple threads, since once a thread has a reference to an immutable object, it doesn't have to worry about another thread changing it.
Well, what happens if I have an immutable list of all employees in a company and a new employee is hired? In this case the immutable list has to be duplicated, and the new copy has to include another employee object. Then the reference to the employee list has to be redirected to the new list.
When this scenario happens, the list itself doesn't change, but the reference to it does, and therefore the code "sees" different data.
If so, I don't understand why immutable objects make our lives easier when working with multiple threads. What am I missing?

The main problem with concurrent updates of mutable data is that threads may perceive variable values stemming from different versions, i.e. a mixture of old and new values when speaking of a single update, forming an inconsistent state that violates the invariants of those variables.
See, for example, Java's ArrayList. It has an int field holding the current size and a reference to an array whose elements are references to the contained objects. The values of these variables have to fulfill certain invariants, e.g. if the size is non-zero, the array reference is never null and the array length is always greater than or equal to the size. When seeing values from different updates of these variables, the invariants no longer hold, so threads may see list contents that never existed in this form, or fail with spurious exceptions reporting an illegal state that should be impossible (like NullPointerException or ArrayIndexOutOfBoundsException).
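To make that concrete, here is a minimal C++ sketch of such a list (ToyList is an invented stand-in; Java's ArrayList has the same internal shape). The comments mark the windows in which an unsynchronized concurrent reader could observe a mixture of two versions:
```cpp
#include <cassert>
#include <cstddef>

// Hypothetical toy list with the invariants described above:
// if size > 0 then data != nullptr, and capacity >= size.
struct ToyList {
    int*        data     = nullptr;
    std::size_t size     = 0;
    std::size_t capacity = 0;

    void push_back(int value) {
        if (size == capacity) {
            std::size_t new_cap = capacity ? capacity * 2 : 4;
            int* new_data = new int[new_cap];
            for (std::size_t i = 0; i < size; ++i) new_data[i] = data[i];
            delete[] data;
            data = new_data;
            // <-- window: an unsynchronized reader here sees the new `data`
            //     with the old `capacity` -- a mixed state that never
            //     existed as a version of the list.
            capacity = new_cap;
        }
        data[size] = value;
        // <-- window: a reader here sees the element written but the old
        //     `size`; if the writes are reordered, it may instead see the
        //     new `size` before the element. The invariants break either way.
        ++size;
    }
};

int main() {
    ToyList list;
    for (int i = 0; i < 10; ++i) list.push_back(i);
    assert(list.size == 10 && list.capacity >= list.size && list.data != nullptr);
}
```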
Note that thread-safe or concurrent data structures only solve the problem regarding the internals of the data structure, so operations no longer fail with spurious exceptions (regarding the collection's state; we haven't talked about the contained elements' state yet). But operations that iterate over these collections, or that look at more than one contained element in any form, are still subject to observing an inconsistent state regarding the contained elements. This also applies to the check-then-act anti-pattern, where an application first checks a condition (e.g. using contains) before acting upon it (like fetching, adding or removing an element), while the condition may change in between, as the sketch below shows.
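Here is that anti-pattern in C++ (all names are invented for illustration): each operation is individually synchronized, yet the composite operation is not:
```cpp
#include <mutex>
#include <set>
#include <string>

std::mutex m;
std::set<std::string> users;  // safe only while m is held

// Broken check-then-act: the condition can change between the
// membership check and the insert, because the lock is released
// in between.
bool add_if_absent_broken(const std::string& u) {
    bool absent;
    {
        std::lock_guard<std::mutex> lock(m);
        absent = (users.count(u) == 0);
    }
    if (absent) {  // <-- another thread may insert u right here
        std::lock_guard<std::mutex> lock(m);
        users.insert(u);
        return true;
    }
    return false;
}

// Correct: check and act inside one critical section; insert()'s
// return value makes the composite operation atomic by construction.
bool add_if_absent(const std::string& u) {
    std::lock_guard<std::mutex> lock(m);
    return users.insert(u).second;
}
```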
In contrast, a thread working on an immutable data structure may work on an outdated version of it, but all variables belonging to that structure are consistent with each other, reflecting the same version. When performing an update, you don't need to think about excluding other threads: it's simply not necessary, as the new data structures are not seen by other threads. The entire task of publishing a new version reduces to publishing the root reference to the new version of your data structure. If you can't stop the other threads from processing the old version, the worst thing that can happen is that you may have to repeat the operation using the new data afterwards; in other words, it's merely a performance issue.
This works smoothly in programming languages with garbage collection, as these let the new data structure refer to the old objects, replacing only the changed objects (and their parents), without needing to worry about which objects are still in use and which are not.
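Here is a minimal sketch of this publication pattern, assuming C++20's std::atomic<std::shared_ptr> (in a garbage-collected language the GC keeps old versions alive; here shared_ptr plays that role):
```cpp
#include <atomic>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Each EmployeeList is treated as immutable once published.
struct EmployeeList {
    std::vector<std::string> names;
};

// The single mutable cell: the root reference to the current version.
std::atomic<std::shared_ptr<const EmployeeList>> current{
    std::make_shared<const EmployeeList>()};

// Writer: build the new version privately, then publish the root.
void hire(const std::string& name) {
    auto old_version = current.load();
    EmployeeList next{old_version->names};  // copy the old contents
    next.names.push_back(name);             // apply the change, still private
    current.store(std::make_shared<const EmployeeList>(std::move(next)));
}

// Readers: one load yields a snapshot that stays internally consistent,
// no matter how many new versions are published while it is in use.
std::size_t head_count() {
    return current.load()->names.size();
}
```
This sketch assumes a single writer, matching the example below; concurrent writers would need a lock or a CAS retry loop.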

Here is an example: (a) we have an immutable list, (b) we have a writer thread that adds elements to the list, and (c) 1000 reader threads that read the list without changing it.
It will work without locks.
If we have more than one writer thread, we will still need a write lock. If we have to remove entries from the list, we will need a read-write lock.
Is it valuable? I don't know.

Related

Is Immutability more useful for concurrency?

It's usually said that immutable data structures are more "friendly" for concurrent programming. The explanation is that if a data structure is mutable and one thread modifies it, then another thread may "see" the previous state of the data structure.
Although it's impossible to modify an immutable data structure, when one needs to change it, one can create a new data structure that references parts of the "old" one.
In my opinion, this situation is still not thread-safe, because one thread can access the old data structure while a second accesses the new one. If so, why are immutable data structures considered more thread-safe?
The idea here is that you can't change an object once it has been created. Every object that is part of some structure is itself immutable. When you create a new structure and reuse some components of the old structure, you still can't change any internal value of any component that makes up the new structure. You can identify each structure by the reference to its root component.
Of course, you still need to make sure you swap the roots in a thread-safe fashion; this is usually done using variants of CAS (compare-and-swap) instructions, as sketched below. Or you can use functional programming, whose idiom of side-effect-free functions (take immutable input, produce a new result) is ideal for thread-safe, multithreaded programming.
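A sketch of that root swap, assuming C++20's std::atomic<std::shared_ptr> (Version is an invented stand-in); the retry loop is the usual shape of a CAS update:
```cpp
#include <atomic>
#include <memory>
#include <vector>

using Version = std::vector<int>;  // stands in for any immutable structure

std::atomic<std::shared_ptr<const Version>> root{
    std::make_shared<const Version>()};

void append(int value) {
    auto expected = root.load();
    for (;;) {
        Version copy = *expected;  // build the successor privately
        copy.push_back(value);
        auto next = std::make_shared<const Version>(std::move(copy));
        // Install the successor only if no other writer got there first;
        // on failure, `expected` is refreshed and we rebuild from it.
        if (root.compare_exchange_weak(expected, next)) break;
    }
}
```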
There are many benefits to immutability over mutability, but that does not mean immutable is always better. Every approach has its benefits and applications. Take a look at this answer for more details on immutability uses. Also check this nicely written answer about mutability benefits in some circumstances.

How do atoms differ from refs?

How do atoms and refs actually differ?
I understand that atoms are declared differently and are updated via the swap! function, whereas refs use alter inside a dosync. The internal implementation, however, seems quite similar, which makes me wonder why I would use one and not the other.
For example, the doc page for atoms (http://clojure.org/atoms) states:
Internally, swap! reads the current value, applies the function to it, and attempts to compare-and-set it in. Since another thread may have changed the value in the intervening time, it may have to retry, and does so in a spin loop. The net effect is that the value will always be the result of the application of the supplied function to a current value, atomically. However, because the function might be called multiple times, it must be free of side effects.
The method described sounds quite similar to me to the STM used for refs.
The difference is that you can't coordinate changes between multiple atoms but you can coordinate changes between multiple refs.
Ref changes have to take place inside of a dosync block. All of the changes in the dosync take place or none of them do (atomic) but that extends to all changes to the refs within that dosync. This behaves a lot like a database transaction.
Let's say for example that you wanted to remove an item from one collection and add it to another but without anyone seeing a case where neither collection has the item. That's impossible to guarantee with atoms but you can guarantee it with refs.
Keep in mind that:
Use Refs for synchronous, coordinated and shared changes.
Use Atoms for synchronous, independent and shared changes.
To me, the implementation differences between atoms and refs don't matter; what I care about is the use cases of each of them.
I use refs when I need to change the state of more than one reference type and I need the STM semantics. I use atoms when I change the state of a single reference type (object, depending on how you see it).
For example, if I need to increase the number of page hits in a web analytics system; I use atoms. If I need to transfer money between two accounts, I use refs.
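To illustrate the two use cases in C++ terms (only an analogy with invented names: swap! really retries a pure function via CAS, and dosync runs an STM transaction with retries rather than taking a lock):
```cpp
#include <atomic>
#include <mutex>

// Atom-like: one independent value. This is the shape of swap! --
// read, apply a pure function, compare-and-set, retry on contention.
std::atomic<long> page_hits{0};

void record_hit() {
    long seen = page_hits.load();
    while (!page_hits.compare_exchange_weak(seen, seen + 1)) {
        // `seen` now holds the value another thread installed; retry.
    }
}

// Ref-like: two values that must change together, approximated with one
// lock. No observer taking the same lock can see the money in both
// accounts or in neither.
std::mutex accounts_mutex;
long checking = 100;
long savings  = 0;

void transfer(long amount) {
    std::lock_guard<std::mutex> lock(accounts_mutex);
    checking -= amount;
    savings  += amount;  // both updates become visible together
}
```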

Memory Management Design

I am having some issues designing the memory management for an entity-component system and am struggling with the details of the design. Here is what I am trying to do (note that all of these classes except Entity are actually virtual, so there will be many different specific implementations):
The Program class will have a container of Entity objects. The Program will loop through the Entities and call update on each of them. It will also have a few SubSystems, which it will also update on each loop.
Each Entity will contain two types of Components. All of them will be owned by a unique_ptr inside the Entity, since their lifetime is directly tied to the entity. One type, UpdateableComponent, will be updated when the Entity's update() method is called. The second type, SubSystemComponent, will be updated from within its respective SubSystem.
Now here are my two problems. The first is that some of the Components will control the lifetime of their parent Entity. My current idea is that a Component will be able to call a function parent.die(), which would set an internal flag inside the Entity. Then, after the Program finishes looping through its updates, it loops through a second time and removes each Entity that was marked for deletion during the last update, roughly as sketched below. I don't know if this is an efficient or smart way to go about it, although it should avoid the problem of an Entity dying while its Components are still updating.
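A minimal sketch of that two-pass mark-and-sweep (the class names match the design above; the details are invented):
```cpp
#include <algorithm>
#include <memory>
#include <vector>

class Entity {
public:
    void update() { /* components may call die() on their parent here */ }
    void die() { dead_ = true; }   // mark only; no destruction mid-update
    bool dead() const { return dead_; }
private:
    bool dead_ = false;
};

class Program {
public:
    void tick() {
        for (auto& e : entities_) e->update();  // pass 1: update everyone
        // pass 2: sweep entities marked dead during the update pass
        entities_.erase(
            std::remove_if(entities_.begin(), entities_.end(),
                           [](const std::unique_ptr<Entity>& e) {
                               return e->dead();
                           }),
            entities_.end());
    }
private:
    std::vector<std::unique_ptr<Entity>> entities_;
};
```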
The second issue is that I am not sure how to reference SubSystemComponents from within a SubSystem. Since they are referred to by a unique_ptr from inside an Entity, I can't use a shared_ptr or a weak_ptr, and a raw pointer would end up dangling when the Entity owning a component dies. I could switch to a shared_ptr inside the Entity for these, and then use a weak_ptr in the SubSystems, but I would prefer not to, because the whole point is that the Entity completely owns its Components.
So 2 things:
Can my first idea be improved upon in a meaningful way?
Is there an easy way to implement weak_ptr-like functionality with unique_ptr, or should I just switch to shared_ptr and make sure not to create more than one shared_ptr to each SubSystemComponent?
Can my first idea be improved upon in a meaningful way?
Hard to say without knowing more about the nature of the work being undertaken. For example, you haven't said anything about your use of threads, but it seems your design gives equal priority to all the possible updates by cycling through things in a set sequence. For some things, where low latency matters or some useful prioritisation could be done, such a looping sequence isn't good, while other times it's ideal.
There are other ways to coordinate the Component-driven removal of Entities from the Program:
return codes could bubble up to the loop over entities, triggering an erase from the container of Entities,
an Observer pattern or lambda/std::function could allow the Program to specify cleanup behaviour.
Is there an easy way to implement weak_ptr-like functionality with unique_ptr,
No.
or should I just switch to shared_ptr and make sure not to create more than one shared_ptr to each SubSystemComponent?
It sounds like a reasonable fit. You could even wrap a shared_ptr in a non-copyable class to avoid accidental mistakes.
Alternatively, as for Entity destruction above, you could coordinate the linkage between SubSystem and SubSystemComponent using events, so the SubSystemComponent destructor calls back to the SubSystem. An Observer pattern is one way to do this; a SubSystemComponent-side std::function fed a lambda is even more flexible. Either way, the SubSystem removes the SubSystemComponent from its records, as sketched below.
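A hedged sketch of that linkage (all names invented): the component's destructor tells its SubSystem to drop its raw pointer, so nothing dangles even though the Entity's unique_ptr retains sole ownership:
```cpp
#include <algorithm>
#include <functional>
#include <vector>

class SubSystemComponent {
public:
    explicit SubSystemComponent(std::function<void(SubSystemComponent*)> on_destroy)
        : on_destroy_(std::move(on_destroy)) {}
    ~SubSystemComponent() {
        if (on_destroy_) on_destroy_(this);  // deregister before we vanish
    }
private:
    std::function<void(SubSystemComponent*)> on_destroy_;
};

class SubSystem {
public:
    void track(SubSystemComponent* c) { components_.push_back(c); }
    void forget(SubSystemComponent* c) {
        components_.erase(
            std::remove(components_.begin(), components_.end(), c),
            components_.end());
    }
private:
    // Raw pointers are safe here: the destructor callback guarantees
    // removal before the pointee is destroyed.
    std::vector<SubSystemComponent*> components_;
};

// Usage: the Entity still owns the component via unique_ptr.
//   SubSystem physics;
//   auto c = std::make_unique<SubSystemComponent>(
//       [&physics](SubSystemComponent* p) { physics.forget(p); });
//   physics.track(c.get());
```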

Handling one object in some containers

I want to store pointers to one instance of an object in several (two or more) containers. I've run into one problem with this idea: how do I handle removal of this object? Objects have a rather stormy life (I'm talking about a game, but I think this situation is not that specific) and can be removed rather often. To my mind this problem divides into two parts:
1.
How should I signal the containers about a deletion? In C# I used to create a boolean property IsDead on stored objects, so each iteration of the main loop first finds 'dead' objects and removes them. No circular references, and everything is rather clear :-) Is this technique correct?
2.
Even if I implement this technique in C++, I run into difficulty with calling destructors while the object is still in some containers. Even if I create some kind of IsDead field and remove the dead object from all lists, I still have to free its memory.
After reading some articles I have the idea that I should have one 'main' container with shared_ptrs to all my objects, while the other containers store weak_ptrs to them, so only the main container controls object lifetime and the others merely check whether an object is still alive. Are my intentions correct, or is there another solution?
It sounds like you're looking for shared_ptr<T>.
http://msdn.microsoft.com/en-us/library/bb982026.aspx
This is a reference-counted pointer in C++ that enables easy sharing of objects. The shared_ptr<T> can be freely handed out to several objects. As the shared_ptr instances are copied around and destructed, the internal reference counter is updated appropriately. When all references are removed, the underlying data is deleted.
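In line with the question's own idea, here is a sketch (GameObject and the container names are invented) of one owning container plus weak_ptr observers; the observers discover deaths lazily, so no IsDead flag is needed:
```cpp
#include <memory>
#include <vector>

struct GameObject { /* ... */ };

std::vector<std::shared_ptr<GameObject>> all_objects;   // owns
std::vector<std::weak_ptr<GameObject>>   render_queue;  // observes

void example() {
    auto obj = std::make_shared<GameObject>();
    all_objects.push_back(obj);
    render_queue.push_back(obj);

    // "Kill" the object by dropping the owning reference.
    all_objects.clear();

    // lock() yields null for expired entries, which can then be pruned.
    for (auto it = render_queue.begin(); it != render_queue.end();) {
        if (auto alive = it->lock()) {
            // use *alive; it stays valid for the duration of this scope
            ++it;
        } else {
            it = render_queue.erase(it);
        }
    }
}
```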

Boost shared_ptr use_count function

My application problem is the following -
I have a large structure foo. Because these are large, and for memory-management reasons, we do not wish to delete them when processing of the data is complete.
We are storing them in std::vector<boost::shared_ptr<foo>>.
My question relates to knowing when all processing is complete. Our first decision was that we do not want any of the other application code to mark a completion flag in the structure, because there are multiple execution paths in the program and we cannot predict which one will be the last.
So in our implementation, once processing is complete, we delete all copies of the boost::shared_ptr<foo> except for the one in the vector. This drops the reference count in the shared_ptr to 1. Is it practical to use shared_ptr::use_count(), checking whether it equals 1, to know when all other parts of my app are done with the data?
One additional reason I'm asking is that the Boost documentation for shared_ptr recommends against using use_count() in production code.
Edit -
What I did not say is that when we need a new foo, we scan the vector of foo pointers looking for one that is not currently in use and use it for the next round of processing. This is why I was thinking that a reference count of 1 would be a safe way to ensure that a particular foo object is no longer in use.
My immediate reaction (and I'll admit, it's no more than that) is that it sounds like you're trying to get the effect of a pool allocator of some sort. You might be better off overloading operator new and operator delete to get the effect you want a bit more directly. With something like that, you can probably just use a shared_ptr as normal, and the other work you want delayed will be handled in operator delete for that class.
That leaves a more basic question: what are you really trying to accomplish with this? From a memory management viewpoint, one common wish is to allocate memory for a large number of objects at once, and after the entire block is empty, release the whole block at once. If you're trying to do something on that order, it's almost certainly easier to accomplish by overloading new and delete than by playing games with shared_ptr's use_count.
Edit: based on your comment, overloading new and delete for the class sounds like the right thing to do. If anything, integration into your existing code will probably be easier; in fact, you can often do it completely transparently.
The general idea for the allocator is pretty much the same as you've outlined in your edited question: have a structure (bitmaps and linked lists are both common) to keep track of your free objects. When new needs to allocate an object, it can scan the bit vector or look at the head of the linked list of free objects, and return its address.
This is one case where linked lists work out quite well: you (usually) don't have to worry about memory overhead, because you store the links right in the free objects, and you (virtually) never have to walk the list, because when you need to allocate an object you just grab the first item on the list.
This sort of thing is particularly common with small objects, so you might want to look at the Modern C++ Design chapter on its small object allocator (and an article or two since then by Andrei Alexandrescu about his newer ideas of how to do that sort of thing). There's also the Boost::pool allocator, which is generally at least somewhat similar.
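For concreteness, here is a minimal sketch of such a class-specific free list (foo stands in for the large structure from the question; no thread safety, and C++17 is assumed for the inline static member):
```cpp
#include <cstddef>
#include <new>

struct foo {
    char payload[4096];  // stands in for the large structure's data

    static void* operator new(std::size_t size) {
        if (free_list) {
            FreeNode* node = free_list;   // reuse the first free block
            free_list = node->next;
            return node;
        }
        return ::operator new(size);      // no recycled block: go to the heap
    }

    static void operator delete(void* p, std::size_t) noexcept {
        // Keep the memory: push the block onto the free list, storing the
        // link inside the block itself (a classic intrusive free list).
        FreeNode* node = static_cast<FreeNode*>(p);
        node->next = free_list;
        free_list = node;
    }

private:
    struct FreeNode { FreeNode* next; };
    static inline FreeNode* free_list = nullptr;  // not thread-safe as-is
};
```
Note that std::make_shared bypasses class-specific operator new, so to route allocations through the free list you would create objects with shared_ptr<foo>(new foo).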
If you want to know whether or not the use count is 1, use the unique() member function.
I would say your application should have some method that eliminates all references to the foo from other parts of the app, and that method should be used instead of checking use_count(). Besides, if use_count() is greater than 1, what would your program do? You shouldn't be relying on shared_ptr's features to eliminate all references; your application architecture should be able to eliminate them. As a final check before removing it from the vector, you could assert(ptr.unique()) to verify it really is being released.
I think you can use shared_ptr's custom deleter functionality to call a particular function when the last copy has been released. That way, you're not using use_count at all.
You would need to hold something other than a copy of the shared_ptr in your vector so that the shared_ptr is only tracking the outstanding processing.
Boost has several examples of custom deleters in the shared_ptr docs.
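For example (acquire and pool are invented names, and the pool is left unsynchronized for brevity): the deleter fires exactly when the last shared_ptr anywhere in the application is released, which replaces any use_count() polling:
```cpp
#include <memory>
#include <vector>

struct foo { /* the large structure */ };

std::vector<foo*> pool;  // recycled objects awaiting reuse

std::shared_ptr<foo> acquire() {
    foo* raw;
    if (pool.empty()) {
        raw = new foo;      // grow the pool only when it runs dry
    } else {
        raw = pool.back();  // reuse a previously released foo
        pool.pop_back();
    }
    // The custom deleter returns the object to the pool instead of
    // destroying it; it runs when the last copy is released.
    return std::shared_ptr<foo>(raw, [](foo* p) { pool.push_back(p); });
}
```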
I would suggest that instead of trying to use the shared_ptr's use_count to keep track, it might be better to implement your own usage counter. This way you have full control, rather than relying on the shared_ptr's count, which, as you rightly suggest, is not recommended. You can also pre-set your own counter to allow for the number of threads you know will need to act on the data, rather than relying on them all being initialised at the beginning to get their copies of the structure.