Reference counting in a collection - c++

Let's say I have a collection of objects (say the element type is string). I want each element of the collection to have a reference count, so that AddUsage increments the count for a given element.
coll.AddUsage("SomeElement"); // Type doesn't matter - but should increase count
On RemoveUsage, it should decrement the reference count for the given element, and if the count reaches 0, it should remove the element from the collection.
It is not important whether AddUsage allocates the element (and sets its reference count to 1) or fails altogether (because the element didn't exist). The important thing is RemoveUsage, which should remove the given element (object) from the collection.
I thought of using a vector of pairs (or a custom struct), or some kind of map/multimap. I could not find an existing class for this in the C++ library (apart, perhaps, from the thread-support library, the atomic classes, the shared-pointer classes, etc.).
Question:
So, my question is: how do I implement such an idea using the existing C++ library? It should be thread safe. Yes, C++11/14 is perfectly okay for me. If there is a good approach, I would probably build it on top of templates.

Assuming you ask for a data structure to implement your reference-counting collection...
Use a map<K,V> with K as the type of the collection elements (in your example string) and V a type that keeps track of meta-information about the element (e.g. the reference count). The simplest case is when V is int.
Then AddUsage is simple: just do refMap[value]++. For RemoveUsage, do refMap[value]--, then check whether the counter hit zero and, if so, remove the value from the map.
You also need error handling, since AddUsage / RemoveUsage may be called with an object that is not in the map (not added to the collection).
EDIT: You tagged your question with "multithreading", so you probably want a mutex of some sort that guards concurrent access to refMap.
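For illustration, a minimal sketch of that design, assuming C++11 and a std::mutex for thread safety (the class name RefCountedSet is made up for this example, not an existing library type):

#include <map>
#include <mutex>

// Sketch only: a reference-counted "collection" over std::map,
// guarded by a mutex so AddUsage/RemoveUsage are safe to call concurrently.
template <typename T>
class RefCountedSet {
public:
    void AddUsage(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        ++refMap_[value];               // inserts with count 0 if missing, then increments to 1
    }

    void RemoveUsage(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = refMap_.find(value);
        if (it == refMap_.end())
            return;                     // error handling: value was never added
        if (--it->second == 0)
            refMap_.erase(it);          // last usage released, drop the element
    }

private:
    std::map<T, int> refMap_;
    std::mutex mutex_;
};

Usage would then be along the lines of RefCountedSet<std::string> coll; coll.AddUsage("SomeElement"); coll.RemoveUsage("SomeElement");.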

You could implement something similar to the shared_ptr class, but extend it to hold a collection of objects.
For example, you could design a class with a map/multimap as its data member. The key would be your object and the value its reference count. As far as the interface is concerned, just expose two methods:
AddUsage(Object);
RemoveUsage(Object);
In your AddUsage method you would first check whether the element already exists in the map. If yes, only increment the count. You would handle RemoveUsage likewise: the object is deleted from the map once its reference count reaches zero.
This is just my opinion. Please let me know if there are any bottlenecks in this implementation.

You can use a static member (integer) variable in the structure or class. Increment or decrement it wherever you want, and remove the element when the value reaches zero.

C++11 unordered_map time complexity

I'm trying to figure out the best way to do a cache for resources. I am mainly looking for native C/C++/C++11 solutions (i.e. I don't have Boost and the like as an option).
What I am doing when retrieving from the cache is something like this:
Object *ResourceManager::object_named(const char *name) {
    if (_object_cache.find(name) == _object_cache.end()) {
        _object_cache[name] = new Object();
    }
    return _object_cache[name];
}
Where _object_cache is defined something like: std::unordered_map <std::string, Object *> _object_cache;
What I am wondering is about the time complexity of doing this, does find trigger a linear-time search or is it done as some kind of a look-up operation?
I mean, if I do _object_cache["something"]; on the given example, it will either return the object or, if it doesn't exist, call the default constructor and insert an object, which is not what I want. I find this a bit counter-intuitive: I would have expected it to report in some way (returning nullptr, for example) that a value for the key couldn't be retrieved, not second-guess what I wanted.
But again, if I do a find on the key, does it trigger a big search which in fact will run in linear time (since the key will not be found it will look at every key)?
Is this a good way to do it, or does anyone have suggestions? Perhaps it's possible to use a lookup of some kind to know whether the key is available. I may access the cache often, and if time is spent searching I would like to eliminate it, or at least make it as fast as possible.
Thankful for any input on this.
The default constructor (triggered by _object_cache["something"]) is what you want; the default constructor for a pointer type (e.g. Object *) gives nullptr (8.5p6b1, footnote 103).
So:
auto &ptr = _object_cache[name];
if (!ptr) ptr = new Object;
return ptr;
You use a reference into the unordered map (auto &ptr) as your local variable so that you assign into the map and set your return value in the same operation. In C++03 or if you want to be explicit, write Object *&ptr (a reference to a pointer).
Note that you should probably be using unique_ptr rather than a raw pointer to ensure that your cache manages ownership.
By the way, find has the same performance as operator[]; average constant, worst-case linear (only if every key in the unordered map has the same hash).
Here's how I'd write this:
auto it = _object_cache.find(name);
return it != _object_cache.end()
? it->second
: _object_cache.emplace(name, new Object).first->second;
The complexity of find on a std::unordered_map is O(1) (constant) on average, especially with std::string keys, which hash well and therefore collide rarely. Even though the name of the method is find, it doesn't do a linear scan as you feared.
If you want to do some kind of caching, this container is definitely a good start.
Note that a cache typically is not just a fast O(1) access but also a bounded data structure. The std::unordered_map will dynamically increase its size when more and more elements are added. When resources are limited (e.g. reading huge files from disk into memory), you want a bounded and fast data structure to improve the responsiveness of your system.
In contrast, a cache will use an eviction strategy whenever size() reaches capacity(), by replacing the least valuable element.
You can implement a cache on top of a std::unordered_map. The eviction strategy can then be implemented by redefining the insert() member. If you want to go for an N-way (for small and fixed N) associative cache (i.e. one item can replace at most N other items), you could use the bucket() interface to replace one of the bucket's entries.
For a fully associative cache (i.e. any item can replace any other item), you could use a Least Recently Used eviction strategy by adding a std::list as a secondary data structure:
using key_tracker_type = std::list<K>;
using key_to_value_type = std::unordered_map<
    K, std::pair<V, typename key_tracker_type::iterator>
>;
By wrapping these two structures inside your cache class, you can define the insert() to trigger a replace when your capacity is full. When that happens, you pop_front() the Least Recently Used item and push_back() the current item into the list.
On Tim Day's blog there is an extensive example with full source code that implements the above cache data structure. It can also be implemented efficiently using Boost.Bimap or Boost.MultiIndex.
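As a rough illustration of that list + unordered_map combination, here is a minimal, simplified LRU sketch (the class name LruCache and its interface are assumptions for this example, not Tim Day's actual code):

#include <cstddef>
#include <iterator>
#include <list>
#include <unordered_map>
#include <utility>

// Sketch of an LRU cache: unordered_map for O(1) lookup, std::list to
// track usage order (front = least recently used, back = most recent).
template <typename K, typename V>
class LruCache {
public:
    explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

    void insert(const K& key, const V& value) {
        auto it = map_.find(key);
        if (it != map_.end()) {           // already cached: update and mark as recently used
            it->second.first = value;
            touch(it);
            return;
        }
        if (map_.size() >= capacity_) {   // full: evict the least recently used entry
            map_.erase(tracker_.front());
            tracker_.pop_front();
        }
        tracker_.push_back(key);
        map_.emplace(key, std::make_pair(value, std::prev(tracker_.end())));
    }

    const V* find(const K& key) {         // returns nullptr on a miss
        auto it = map_.find(key);
        if (it == map_.end())
            return nullptr;
        touch(it);
        return &it->second.first;
    }

private:
    using key_tracker_type = std::list<K>;
    using key_to_value_type =
        std::unordered_map<K, std::pair<V, typename key_tracker_type::iterator>>;

    void touch(typename key_to_value_type::iterator it) {
        // move this key to the back of the tracker (most recently used)
        tracker_.splice(tracker_.end(), tracker_, it->second.second);
    }

    std::size_t capacity_;
    key_tracker_type tracker_;
    key_to_value_type map_;
};

Eviction here replaces the least recently used element whenever size() reaches the capacity, as described above.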
The insert/emplace interfaces to map/unordered_map are enough to do what you want: find the position, and insert if necessary. Since the mapped values here are pointers, ekatmur's response is ideal. If your values are fully-fledged objects in the map rather than pointers, you could use something like this:
Object& ResourceManager::object_named(const char *name, const Object& initialValue) {
return _object_cache.emplace(name, initialValue).first->second;
}
The values name and initialValue make up the arguments for the key-value pair that needs to be inserted, if there is no key equivalent to name. emplace returns a pair: second indicates whether anything was inserted (i.e. the key in name is new) - we don't care about that here; first is the iterator pointing to the (possibly newly created) key-value entry whose key is equivalent to name. So if the key was already there, dereferencing first gives the original Object for that key, which has not been overwritten with initialValue; otherwise the key was newly inserted from name, the entry's value portion was copied from initialValue, and first points to that.
ekatmur's response is equivalent to this:
Object* ResourceManager::object_named(const char *name) {
    bool res;
    auto iter = _object_cache.end();
    std::tie(iter, res) = _object_cache.emplace(name, nullptr);
    if (res) {
        iter->second = new Object(); // we inserted a null pointer - now replace it
    }
    return iter->second;
}
but profits from the fact that the default-constructed pointer value created by operator[] is null to decide whether a new Object needs to be allocated. It's more succinct and easier to read.

C++ Deleting objects from memory

Lets say I have allocated some memory and have filled it with a set of objects of the same type, we'll call these components.
Say one of these components needs to be removed, what is a good way of doing this such that the "hole" created by the component can be tested for and skipped by a loop iterating over the set of objects?
The inverse should also be true, I would like to be able to test for a hole in order to store new components in the space.
I'm thinking memset & checking for 0...
boost::optional<component> seems to fit your needs exactly. Put those in your storage, whatever that happens to be. For example, with std::vector:
// initialize the vector with 100 non-components
std::vector<boost::optional<component>> components(100);

// adding a component at position 15
components[15] = component(x, y, z);

// deleting a component at position 82
components[82].reset();

// looping through and checking for existence
for (auto& opt : components)
{
    if (opt) // component exists
    {
        operate_on_component(*opt);
    }
    else // component does not exist
    {
        // whatever
    }
}

// move components to the front, non-components to the back
std::partition(components.begin(), components.end(),
    [](boost::optional<component> const& opt) -> bool { return static_cast<bool>(opt); });
The short answer is that it depends on how you store it in memory.
For example, the standard guarantees that vector elements are stored contiguously.
If you can predict the size of the object, you may be able to use sizeof and pointer arithmetic to predict the location in memory.
Good luck.
There are at least two solutions:
1) Mark the hole with some flag and then skip it when processing. Benefit: 'deletion' is very fast (only setting a flag). If the object is not that small, even adding a "bool alive" flag is not hard to do.
2) Move the hole to the end of the pool and replace it with some 'alive' object.
This problem is related to storing and processing particle systems; you can find some suggestions there.
If it is not possible to move the "live" components up, or reorder them such that there is no hole in the middle of the sequence, then the best option is to give the component objects a "deleted" flag/state that can be tested through a member function.
Such a "deleted" state does not cause the object to be removed from memory (that is just not possible in the middle of a larger block), but it does make it possible to mark the spot as not being in use for a component.
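A minimal sketch of that flag approach (the names Component, is_deleted, and process are just placeholders):

#include <cstddef>

struct Component {
    // ... actual component data ...
    bool deleted = false;                  // marks this slot as a reusable "hole"

    void mark_deleted() { deleted = true; }
    bool is_deleted() const { return deleted; }
};

// Iterate over a fixed block of components, skipping the holes;
// a hole can later be reused by overwriting it and clearing the flag.
void process(Component* components, std::size_t count) {
    for (std::size_t i = 0; i < count; ++i) {
        if (components[i].is_deleted())
            continue;
        // ... operate on components[i] ...
    }
}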
When you say you have "allocated some memory" you are likely talking about an array. Arrays are great because they have virtually no overhead and extremely fast access by index. But the bad thing about arrays is that they aren't very friendly for resizing. When you remove an element in the middle, all following elements have to be shifted back by one position.
But fortunately there are other data structures you can use, like a linked list or a binary tree, which allow quick removal of elements. C++ even implements these in the container classes std::list and std::set.
A list is great when you don't know beforehand how many elements you need, because it can shrink and grow dynamically without wasting any memory when you remove or add any elements. Also, adding and removing elements is very fast, no matter if you insert them at the beginning, in the end, or even somewhere in the middle.
A set is great for quick lookup. When you have an object and you want to know if it's already in the set, checking it is very quick. A set also automatically discards duplicates which is really useful in many situations (when you need duplicates, there is the std::multiset). Just like a list it adapts dynamically, but adding new objects isn't as fast as in a list (not as expensive as in an array, though).
Two suggestions:
1) You can use a Linked List to store your components, and then not worry about holes.
Or if you need these holes:
2) You can wrap your component in an object holding a pointer to the component, like so:
class ComponentWrap
{
public:
    Component *component;
};
and check whether the wrapper's component pointer is nullptr to find out whether the component is deleted.
Exception way:
3) Put your code in a try catch block in case you hit a null pointer error.

FastRemoveObject in CCArray will change the positions of objects?

I was told that if relying on a specific ordering of objects, I should not use the fastRemoveObject methods in CCArray. Cocos2d API references don't show the contents of the method specifically. Can anyone tell me the reason?
Yes, fastRemoveObject changes the order of nodes. It is therefore not recommended unless it really doesn't matter in your case.
What it does is the following:
1) assign the object at the last index to the index of the object being removed
2) nil the last object
3) decrease the array count
That way the array will not have to perform memory operations (hence: fast). But the last object will now be at the index of the removed object.
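In plain C++ terms, the same trick looks roughly like this (a sketch using std::vector, not Cocos2d's actual implementation):

#include <cstddef>
#include <utility>
#include <vector>

// "Fast remove": overwrite the removed slot with the last element,
// then drop the last slot. O(1), but the relative order of elements changes.
template <typename T>
void fast_remove(std::vector<T>& v, std::size_t index) {
    if (index + 1 != v.size())
        v[index] = std::move(v.back());  // the last element moves into the hole
    v.pop_back();                        // shrink by one; nothing in the middle is shifted
}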

Problem counting words from a phrase without using std::map

I want to count the number of appearances of words in some phrases.
My problem is that I can't use a map to do this:
map[word] = appearance++;
Instead I have a class that uses a binary tree and behaves like a map, but I only have the method:
void insert(string, int);
Is there a way to count the word appearances using this function? (I can't find a way to increment the int for every different word.) Or do I have to overload operator[] for the class? What should I do?
Presumably you also have a way to retrieve data from your map-like structure (storing data does little good unless you can also retrieve it). The obvious method would be to retrieve the current value, increment it, and store the result (or store 1 if retrieving showed the value wasn't present previously).
I guess this is homework and you're learning about binary trees. In that case I would implement operator[] to return a reference to the existing value (and, if no value exists, default-construct a value, insert it, and return that). Obviously operator[] will be implemented quite similarly to your insert method.
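To make the retrieve-increment-store idea concrete: assuming the tree class also exposes some lookup method (here a hypothetical int* find(const std::string&) that returns nullptr on a miss; adjust to whatever your class actually provides), the counting loop could look like this. With an operator[] as described above, the loop body shrinks to ++tree[word].

#include <sstream>
#include <string>

// Hypothetical interface of the binary-tree "map" used below:
//   void BinaryTreeMap::insert(std::string word, int count);
//   int* BinaryTreeMap::find(const std::string& word); // nullptr if absent
void count_words(BinaryTreeMap& tree, const std::string& phrase) {
    std::istringstream in(phrase);
    std::string word;
    while (in >> word) {
        if (int* count = tree.find(word))
            ++*count;              // word seen before: bump its counter
        else
            tree.insert(word, 1);  // first appearance of this word
    }
}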
Can you edit the "insert" function?
If you can, you could add a static variable inside the function that counts the appearances.

what happens when you modify an element of an std::set?

If I change an element of an std::set, for example through an iterator, I know it is not "reinserted" or "resorted", but is there any mention of whether it triggers undefined behavior? For example, I would imagine insertions would screw up. Is there any mention of what specifically happens?
You should not edit the values stored in the set directly. I copied this from MSDN documentation which is somewhat authoritative:
The STL container class set is used for the storage and retrieval of data from a collection in which the values of the elements contained are unique and serve as the key values according to which the data is automatically ordered. The value of an element in a set may not be changed directly. Instead, you must delete old values and insert elements with new values.
Why this is so is pretty easy to understand. The set implementation has no way of knowing you have modified the value behind its back. The usual implementation is a red-black tree; having changed the value, the position of that element in the tree will be wrong. You would expect to see all manner of wrong behaviour, such as exists queries returning the wrong result because the search goes down the wrong branch of the tree.
The precise answer is platform dependent, but as a general rule a "key" (the thing you put in a set, or the first type of a map) is supposed to be immutable. To put it simply, it should not be modified, and there is no such thing as automatic re-insertion.
More precisely, the member variables used to compare the key must not be modified.
The Windows VC compiler is quite flexible (tested with VC8) and this code compiles:
// creation
std::set<int> toto;
toto.insert(4);
toto.insert(40);
toto.insert(25);

// bad modif
(*toto.begin()) = 100;

// output
for (std::set<int>::iterator it = toto.begin(); it != toto.end(); ++it)
{
    std::cout << *it << " ";
}
std::cout << std::endl;
The output is 100 25 40, which is obviously not sorted... Bad...
Still, such behavior is useful when you want to modify data that does not participate in operator<. But you'd better know what you're doing: that's the price you pay for being so flexible.
Some might prefer gcc behavior (tested with 3.4.4) which gives the error "assignment of read-only location". You can work around it with a const_cast:
const_cast<int&>(*toto.begin())=100;
That's now compiling on gcc as well, same output: 100 25 40.
But at least doing so will probably make you wonder what's happening, then go to Stack Overflow and see this thread :-)
You cannot do this; they are const. There exists no method by which the set can detect you making a change to the internal element, and as a result you cannot do so. Instead, you have to remove and reinsert the element. If you are using elements that are expensive to copy, you may have to switch to using pointers and custom comparators (or switch to a C++1x compiler that supports rvalue references, which would make things a whole lot nicer).
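To spell out the remove-and-reinsert approach that the answers above recommend, a minimal sketch:

#include <set>

// "Modify" an element of a std::set the safe way: erase the old value and
// insert the new one, so the set keeps its ordering invariant intact.
void replace_value(std::set<int>& s, int old_value, int new_value) {
    auto it = s.find(old_value);
    if (it != s.end()) {
        s.erase(it);            // remove the old key
        s.insert(new_value);    // the replacement lands in its sorted position
    }
}

(For elements that are expensive to copy, C++17's std::set::extract lets you move the node out, change it, and reinsert it without a copy.)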