Associative container with queue-like properties on the values - c++

I have data in the form of instances of a POD struct. Under "normal" conditions, I need to access them via a unique ID, currently via a std::map. If something goes wrong, however, I need to traverse the data in an order provided by a specific member of the POD struct.
I don't want to copy all data sets of the map to a priority queue in case of error -- this seems much too expensive.
I tried running std::make_heap on the std::map, but that doesn't even compile, because the map's iterators cannot be subtracted.
The sort keys will change on a regular basis, so keeping the data in a priority queue and just storing pointers in the map does not seem feasible, especially as access via the map (the typical use case) becomes more expensive due to the indirection.
The other way around, i.e. storing pointers in a separate data structure that can be heapified on demand seems feasible, but the synchronization might be error-prone.
Is there anything in the std libraries, boost or tbb that would accomplish what I want?

It looks like this is a job for the Boost Multi-Index Containers Library.
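A minimal sketch of what that might look like with Boost.MultiIndex, assuming a POD with an integer id used for the normal lookups and a priority member that defines the error-case traversal order (both names are placeholders):

    #include <boost/multi_index_container.hpp>
    #include <boost/multi_index/hashed_index.hpp>
    #include <boost/multi_index/ordered_index.hpp>
    #include <boost/multi_index/member.hpp>

    struct Record {
        int    id;        // unique ID used for the normal lookups
        double priority;  // member that defines the error-case order
    };

    namespace bmi = boost::multi_index;

    using RecordSet = bmi::multi_index_container<
        Record,
        bmi::indexed_by<
            // index #0: fast lookup by unique ID (the usual std::map use case)
            bmi::hashed_unique< bmi::member<Record, int, &Record::id> >,
            // index #1: ordered traversal by priority (the error case)
            bmi::ordered_non_unique< bmi::member<Record, double, &Record::priority> >
        >
    >;

    const Record* by_id(const RecordSet& records, int id) {
        auto it = records.find(id);               // the container defaults to index #0
        return it == records.end() ? nullptr : &*it;
    }

    void on_error(const RecordSet& records) {
        // walk all records in ascending priority order, no copying involved
        for (const Record& r : records.get<1>()) {
            // ... handle r ...
            (void)r;
        }
    }

One caveat: when the sort member changes, the update has to go through the index's modify() or replace() member functions so that the ordered view stays consistent.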

Related

Mapping different keys to the same value in c++

So I searched around and found that we can use a multimap to map a set of equivalent keys to different values. I am wondering how to do the reverse? That is, to map different keys to the same value? Is there an STL data structure for that?
A normal std::map (or std::unordered_map) can do that. You have a few choices on how to do that:
Normal key-value store, where you copy the values as needed
Normal key-value store, but where the value is a pointer to the actual data that can be shared, optionally using a std::shared_ptr
Use some kind of structure (a std::tuple perhaps) to collect the keys (though it's probably more trouble handling this than it's worth; not something I really recommend)
Use Boost multi-index containers
With Boost you have a ready solution that you can just use, and if you already use Boost in your project (it is a great set of libraries, and complements the standard library well) I think it's a no-brainer to pick that solution.
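For instance, option 2 above (a plain map whose values are shared pointers to one object) might look roughly like this; the key and value types are just placeholders:

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    int main() {
        auto value = std::make_shared<std::string>("shared payload");

        std::map<int, std::shared_ptr<std::string>> by_key;
        by_key[1] = value;   // both keys refer to the
        by_key[7] = value;   // same underlying string

        *by_key[1] += " (modified)";
        std::cout << *by_key[7] << '\n';   // prints "shared payload (modified)"
    }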
You can have a std::map such that multiple keys map to the same value. There is nothing against that. However, what you are probably looking for is the ability to find all the keys that map to a given value without iterating over the entire map. I don't think there is any such data structure for that.

Which is the fastest STL container for find?

Alright as a preface I have a need to cache a relatively small subset of rarely modified data to avoid querying the database as frequently for performance reasons. This data is heavily used in a read-only sense as it is referenced often by a much larger set of data in other tables.
I've written a class which will have the ability to store basically the entirety of the two tables in question in memory while listening for commit changes in conjunction with a thread safe callback mechanism for updating the cached objects.
My current implementation has two std::vectors one for the elements of each table. The class provides both access to the entirety of each vector as well as convenience methods for searching for a specific element of table data via std::find, std::find_if, etc.
Does anyone know if using std::list, std::set, or std::map over std::vector for searching would be preferable? Most of the time that is what will be requested of these containers after populating once from the database when a new connection is made.
I'm also open to using C++0x features supported by VS2010 or Boost.
For searching a particular value, std::set and std::map take O(log N) time, while the other two take O(N) time; so std::set or std::map are probably better. Since you have access to C++0x, you could also use std::unordered_set or std::unordered_map, which take constant time on average.
For find_if, there's little difference between them, because it takes an arbitrary predicate and containers cannot optimize arbitrarily, of course.
However, if you will be calling find_if frequently with a certain predicate, you can optimize it yourself: use a std::map or std::set with a custom comparator or specially crafted keys, and use find instead.
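A sketch of that idea, assuming the frequent find_if is always searching by a name field (Element and its members are invented for illustration):

    #include <set>
    #include <string>

    struct Element {
        std::string name;
        int         payload;
    };

    // key the set on the field you always search by
    struct ByName {
        bool operator()(const Element& a, const Element& b) const {
            return a.name < b.name;
        }
    };

    using ElementSet = std::set<Element, ByName>;

    const Element* find_by_name(const ElementSet& s, const std::string& name) {
        auto it = s.find(Element{name, 0});   // O(log N) instead of a linear find_if
        return it == s.end() ? nullptr : &*it;
    }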
A sorted vector using std::lower_bound can be just as fast as std::set if you're not updating very often; they're both O(log n). It's worth trying both to see which is better for your own situation.
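A rough sketch of the sorted-vector approach with std::lower_bound (Row and its id field are placeholders):

    #include <algorithm>
    #include <vector>

    struct Row { int id; /* ... other columns ... */ };

    // rows must be kept sorted by id for this to work
    const Row* find_by_id(const std::vector<Row>& rows, int id) {
        auto it = std::lower_bound(rows.begin(), rows.end(), id,
                                   [](const Row& r, int key) { return r.id < key; });
        return (it != rows.end() && it->id == id) ? &*it : nullptr;
    }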
Since from your (extended) requirements you need to search on multiple fields, I would point you to Boost.MultiIndex.
This Boost library lets you build one container (with only one copy of each element it contains) and index it over an arbitrary number of indices. It also lets you specify which indices to use.
To determine the kind of index to use, you'll need extensive benchmarks. 500 is a relatively low number of entries, so constant factors will matter more than asymptotic complexity. Furthermore, there can be a noticeable difference between single-threaded and multi-threaded usage (most hash-table implementations can collapse under MT usage because they do not use linear rehashing, and thus a single thread ends up rehashing the table, blocking all the others).
I would recommend a sorted index (skip-list-like, if possible) to accommodate range requests (all names beginning with "Abc"?) if the performance difference is either unnoticeable or simply does not matter.
If you only want to search for distinct values in one specific column of the table, then a hash table (such as std::unordered_map) is fastest.
If you want to be able to search using several different predicates, you will need some kind of index structure. It can be implemented by extending your current vector based approach with several hash tables or maps, one for each field to search for, where the value is either an index into the vector, or a direct pointer to the element in the vector.
Going further, if you want to be able to search for ranges, such as all occasions having a date in July, you need an ordered data structure from which you can extract a range.
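A hedged sketch of such an index structure over a vector, with invented field names: an unordered by_id index for exact lookups and an ordered by_month index for range queries such as "all occasions in July".

    #include <cstddef>
    #include <map>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Occasion {
        int         id;
        std::string name;
        int         month;   // 1..12
    };

    struct OccasionCache {
        std::vector<Occasion> rows;                    // the cached table, populated once
        std::unordered_map<int, std::size_t> by_id;    // id    -> index into rows
        std::multimap<int, std::size_t>      by_month; // month -> index into rows (ordered)

        void add(const Occasion& o) {
            rows.push_back(o);
            by_id.emplace(o.id, rows.size() - 1);
            by_month.emplace(o.month, rows.size() - 1);
        }

        const Occasion* find(int id) const {
            auto it = by_id.find(id);
            return it == by_id.end() ? nullptr : &rows[it->second];
        }

        // range query, e.g. everything in July: by_month.equal_range(7)
    };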
Not an answer per se, but be sure to use a typedef to refer to the container type you do use, something like typedef std::vector<itemtype> data_table_cache; then use your typedef type everywhere.

Associating and iterating in C++

I've got a situation where I want to use an associative container, and I chose to use a std::unordered_map, because it's perfectly feasible that this container could be used to hold millions of elements or more. But now I also need to iterate in order. I considered having the value types link to each other in a list, but now I'm going to have issues with memory management.
Should I change container, say to a std::map? Or just iterate once through my unordered_map, insert into a vector, and sort, then iterate? It's pretty unlikely that I will need to iterate in an ordered fashion repeatedly.
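For reference, the "copy into a vector and sort once" option mentioned above might look roughly like this (key and value types are placeholders):

    #include <algorithm>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    void ordered_pass(const std::unordered_map<int, std::string>& m) {
        // copy once, sort by key, then iterate in order
        std::vector<std::pair<int, std::string>> sorted(m.begin(), m.end());
        std::sort(sorted.begin(), sorted.end(),
                  [](const std::pair<int, std::string>& a,
                     const std::pair<int, std::string>& b) { return a.first < b.first; });

        for (const auto& kv : sorted) {
            // ... process kv.first / kv.second in key order ...
            (void)kv;
        }
    }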
Well, you know the O() of the various operations of the two alternatives you've picked. You should pick based on that and do a cost/benefit analysis based on where you need the performance to happen and which container does best for THAT.
Of course, I couldn't possibly know enough to do that analysis for you.
You could use Boost.MultiIndex, specifying the unordered (hashed) index as well as an ordered one, on the same underlying object collection.
Possible issues with this - there is no natural mapping from an existing associative container model, and it might be overkill if you don't need the second index all the time.

What are Containers/Adapters? C++

What are containers/adapters? I have basic knowledge of C++ and its sub-topics (classes/templates/STL).
Can anyone please explain in layman's language and give me a practical example of the application of containers/adapters?
A container is a specific data structure that contains data, usually in an unbounded amount. Each container type has limitations on how to access, add, or remove data efficiently.
Below are a few examples of containers using STL classes.
Sequence Containers
Here are the sequence containers, meaning the data is reliably ordered (that is, there is a front and a back to them. I do NOT mean that they automatically sort themselves!).
A vector is a bit like a flexibly-sized array. Vectors are random-access, meaning you can access any element with integer index in constant time (just like an array). You can add or remove from the back of the vector in amortized constant time as well. Anywhere else, though, and you're probably looking at having to recopy potentially all of the elements.
A deque, or double-ended queue, is like a vector but you can add to the front or the back in amortized constant time. You can still access elements in constant time, but deque elements are not guaranteed to be contiguous in memory like vectors or arrays.
A list is a linked list, meaning data which are linked together by pointers. You have constant-time access to the beginning and the end, but in order to get anywhere in the middle you need to iterate through the list. You can add elements anywhere in the list in constant time, though, if you already have a pointer to one of the nearby nodes.
Associative Containers
These are associative containers, meaning that elements are no longer ordered but instead have associations with each other used for determining uniqueness or mappings:
A set is a container with unique elements. You can only add one of each element to a set; any other additions are ignored.
A multiset is like a set, but you can put more than one of an element in. The multiset keeps track of how many of each kind of element are in the structure.
A map, also known as an associative array, is a structure in which you insert key-value pairs; then you can look up any value by supplying the key. So it's a bit like an array that you can access with a string index (key), or any other kind of index. (If you insert another key-value pair and the key already exists, then you just overwrite the value for the original key.)
A multimap is a map that allows for insertion of multiple values for the same key. When you do a key lookup, you get back a container with all the values in it.
Container Adapters
Container adapters, on the other hand, are interfaces created by limiting functionality in a pre-existing container and providing a different set of functionality. When you declare a container adapter, you have the option of specifying which sequence container forms the underlying container. These are:
A stack is a container providing Last-In, First-Out (LIFO) access. Basically, you remove elements in the reverse order you insert them. It's difficult to get to any elements in the middle. Usually this goes on top of a deque.
A queue is a container providing First-In, First-Out (FIFO) access. You remove elements in the same order you insert them. It's difficult to get to any elements in the middle. Usually this goes on top of a deque.
A priority_queue is a container providing sorted-order access to elements. You can insert elements in any order, and then retrieve the element with the highest priority (the "largest" under the default comparison) at any time. Priority queues in the C++ STL use a heap structure internally, which in turn is basically array-backed; thus, usually this goes on top of a vector.
See this reference page for more information, including time complexity for each of the operations and links to detailed pages for each of the container types.
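A tiny, illustrative example using a few of the containers and adapters described above (the values are arbitrary):

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    int main() {
        // sequence container: elements keep the order you give them
        std::vector<int> v{3, 1, 2};
        v.push_back(4);                            // amortized O(1) at the back

        // associative container: key -> value lookup
        std::map<std::string, int> ages{{"alice", 30}, {"bob", 25}};
        std::cout << ages["alice"] << '\n';        // 30

        // container adapter: a priority_queue built on top of a vector
        std::priority_queue<int> pq(v.begin(), v.end());
        std::cout << pq.top() << '\n';             // 4 (the largest element)
    }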
<joke>C++ is technical and hard to understand :-D</joke>
Containers are data types from STL that can contain data.
Example: vector as a dynamic array
Adapters are data types from STL that adapt a container to provide specific interface.
Example: stack providing stack interface on top of the chosen container
(side note: both are actually templates not data types, but the definition looks better this way)
The technical definition of "container" from The SGI STL documentation is pretty good:
A Container is an object that stores other objects (its elements), and that has methods for accessing its elements. In particular, every type that is a model of Container has an associated iterator type that can be used to iterate through the Container's elements.
So, a container is a data structure that holds ("contains") a collection of objects of some type. The key idea is that there are different types of containers, each of which stores objects in a different way and provides different performance characteristics, but all of them have a standard interface so that you can swap one out for another easily and without modifying too much of the code that uses the container. The idea is that the containers are designed to be interchangeable as much as possible.
The container adapters are classes that provide a subset of a container's functionality but may provide additional functionality that makes it easier to use containers for certain scenarios. For example, you could easily use std::vector or std::deque for a stack data structure and call push_back, back, and pop_back as the stack interface; std::stack provides an interface that can use a std::vector or std::deque or other sequence container but provides the more standard push, top, and pop member functions for accessing members.
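For example, a minimal sketch of std::stack over its default std::deque and over a std::vector:

    #include <deque>
    #include <iostream>
    #include <stack>
    #include <vector>

    int main() {
        std::stack<int> s;                        // default underlying container: std::deque<int>
        std::stack<int, std::vector<int>> sv;     // same interface, std::vector underneath

        s.push(1);
        s.push(2);
        std::cout << s.top() << '\n';             // 2 (last in, first out)
        s.pop();
        std::cout << s.top() << '\n';             // 1

        sv.push(42);
        std::cout << sv.top() << '\n';            // 42
    }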

Least Recently Used cache using C++

I am trying to implement an LRU cache using C++. I would like to know what the best design for implementing one is. I know an LRU cache should provide find(), adding an element, and removing an element, where the removal removes the LRU element. What are the best ADTs to implement this?
For example: if I use a map with the element as value and a time counter as key, I can search in O(log n) time, inserting is O(n), deleting is O(log n).
One major issue with LRU caches is that there are few "const" operations; most will change the underlying representation (if only because they bump the element accessed).
This is of course very inconvenient, because it means it's not a traditional STL container, and therefore any idea of exposing iterators is quite complicated: when the iterator is dereferenced this is an access, which should modify the list we are iterating on... oh my.
And there are the performance considerations, both in terms of speed and memory consumption.
It is unfortunate, but you'll need some way to organize your data in a queue (LRU order), with the possibility to remove elements from the middle, and this means your elements will have to be independent from one another. A std::list fits, of course, but it's more than you need. A singly-linked list is sufficient here, since you don't need to iterate the list backward (you just want a queue, after all).
However, one major drawback of those is their poor locality of reference; if you need more speed you'll need to provide your own custom (pool?) allocator for the nodes, so that they are kept as close together as possible. This will also alleviate heap fragmentation somewhat.
Next, you obviously need an index structure (for the cache bit). The most natural is to turn toward a hash map. std::tr1::unordered_map, std::unordered_map or boost::unordered_map are normally good-quality implementations, and some should be available to you. They also allocate extra nodes for hash collision handling; you might prefer other kinds of hash maps, so check out Wikipedia's article on the subject and read about the characteristics of the various implementation techniques.
Continuing, there is the (obvious) threading support. If you don't need thread support, that's fine; if you do, however, it's a bit more complicated:
As I said, there are few const operations on such a structure, so you don't really need to differentiate read/write accesses
Internal locking is fine, but you might find that it doesn't play nicely with your uses. The issue with internal locking is that it doesn't support the concept of a "transaction", since it relinquishes the lock between each call. If this is your case, make the object itself lockable and provide a std::unique_ptr<Lock> lock() method (in debug builds, you can assert that the lock is held at the entry point of each method); a sketch follows after this list
There is (in locking strategies) the issue of reentrancy, i.e. the ability to "relock" the mutex from within the same thread; check Boost.Thread for more information about the various locks and mutexes available
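A rough sketch of that "hand the caller the lock" idea, using std::unique_lock as a modern stand-in for the std::unique_ptr<Lock> suggested above (everything here is illustrative):

    #include <mutex>

    class Cache {
    public:
        // the caller holds the returned guard for the duration of its "transaction"
        std::unique_lock<std::mutex> lock() {
            return std::unique_lock<std::mutex>(mutex_);
        }
        // ... get/put methods, which can assert the lock is held in debug builds ...
    private:
        std::mutex mutex_;
    };

    // usage:
    //   auto guard = cache.lock();
    //   cache.get(...); cache.put(...);   // several calls under one lock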
Finally, there is the issue of error reporting. Since it is expected that a cache may not be able to retrieve the data you put in, I would consider using an exception to be in poor taste. Consider either pointers (Value*) or Boost.Optional (boost::optional<Value&>). I would prefer Boost.Optional because its semantics are clear.
The best way to implement an LRU is to use the combination of a std::list and stdext::hash_map (if you want to use only the standard library, then std::map).
Store the data in the list so that the least recently used item is at the back, and use the map to point to the list items.
For "get", use the map to find the list node, retrieve the data, move that node to the front (since it was just used), and update the map.
For "insert", remove the last element from the list, add the new data to the front, and update the map.
This is the fastest you can get. If you are using a hash_map, almost all operations are done in O(1); if using std::map, they take O(log n) in all cases.
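A minimal sketch along those lines, using std::unordered_map in place of hash_map (key/value types are placeholders, and there is no thread safety):

    #include <cstddef>
    #include <list>
    #include <string>
    #include <unordered_map>
    #include <utility>

    class LruCache {
    public:
        explicit LruCache(std::size_t capacity) : capacity_(capacity) {}

        // returns nullptr on a miss, per the "pointer instead of exception" advice above
        const std::string* get(int key) {
            auto it = index_.find(key);
            if (it == index_.end())
                return nullptr;
            // move the accessed node to the front: it is now the most recently used
            items_.splice(items_.begin(), items_, it->second);
            return &it->second->second;
        }

        void put(int key, std::string value) {
            auto it = index_.find(key);
            if (it != index_.end()) {                 // existing key: update and refresh
                it->second->second = std::move(value);
                items_.splice(items_.begin(), items_, it->second);
                return;
            }
            if (items_.size() == capacity_) {         // evict the least recently used (back)
                index_.erase(items_.back().first);
                items_.pop_back();
            }
            items_.emplace_front(key, std::move(value));
            index_[key] = items_.begin();
        }

    private:
        using Item = std::pair<int, std::string>;
        std::size_t capacity_;
        std::list<Item> items_;                                   // front = most recently used
        std::unordered_map<int, std::list<Item>::iterator> index_;
    };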
A very good implementation is available here
This article describes a couple of C++ LRU cache implementations (one using STL, one using boost::bimap).
When you say priority, I think "heap" which naturally leads to increase-key and delete-min.
I would not make the cache visible to the outside world at all if I could avoid it. I'd just have a collection (of whatever) and handle the caching invisibly, adding and removing items as needed, but the external interface would be exactly that of the underlying collection.
As far as the implementation goes, a heap is probably the most obvious. It has complexities roughly similar to a map, but instead of building a tree from linked nodes, it arranges items in an array and the "links" are implicit based on array indices. This increases the storage density of your cache and improves locality in the "real" (physical) processor cache.
I suggest a heap and maybe a Fibonacci Heap
I'd go with a normal heap in C++.
With std::make_heap (guaranteed by the standard to be O(n)), std::pop_heap, and std::push_heap in #include <algorithm>, implementing it would be a piece of cake. You only have to worry about increase-key.
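A small illustration of those three algorithms on a std::vector (the values are arbitrary):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> heap{5, 1, 9, 3};

        std::make_heap(heap.begin(), heap.end());   // O(n); largest element ends up at heap.front()

        heap.push_back(7);                          // add an element...
        std::push_heap(heap.begin(), heap.end());   // ...and restore the heap property, O(log n)

        std::pop_heap(heap.begin(), heap.end());    // moves the largest element to the back
        std::cout << heap.back() << '\n';           // 9
        heap.pop_back();                            // actually remove it
    }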