Is there any data structure in C++11/STL/Boost which represents an array with a generic index type or do I have to implement such a type on my own?
I.e. I would like to do something like this:
std::set<std::string> to_lookup, to_lookup2;
int i = 10, j = 13;
// initialization of to_lookup
// count is of the container type/data structure I am looking for
count[to_lookup] = i;
count[to_lookup2] = j;
I know the std::map and std::unordered_map containers from the STL, but those do not match my requirements. It is critical for me that insert and lookup can be done in O(1).
If you're already using std::unordered_map, it is basically impossible to get significantly faster, simply because there's always some overhead that depends on the number of elements (so you can't get a perfect O(1) unless you're able to reference all possible keys as indexes in an array).
However, if you still think that a std::unordered_map is too slow simply due to the sheer number of entries, try adding another layer that reduces the number of elements in each map.
In your example, using std::string as keys(?), you could just use the very first character (untested but should work):
std::vector<std::unordered_map<std::string, myWhateverType> > container(256);
// To access an element, this just adds one more layer:
container[static_cast<unsigned char>(key[0])][key] = value;
Iterating over all elements becomes a bit more complicated though. However, this essentially reduces the number of elements in each std::unordered_map to roughly 1/256 (depending on the actual distribution of key values of course; if all keys start with the same prefix, such as key, then you won't gain anything other than a small overhead).
Will it improve performance? This really depends on the number of entries and your keys.
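As a hedged illustration of what iterating over that two-level container looks like (a sketch, with int standing in for myWhateverType), it is just one loop deeper:

#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main()
{
    // The two-level container from above, with int in place of myWhateverType.
    std::vector<std::unordered_map<std::string, int>> container(256);

    std::string key = "example";
    container[static_cast<unsigned char>(key[0])][key] = 42;

    // Iterating over all elements: walk every first-level slot, then every entry in it.
    for (const auto& inner : container)
        for (const auto& kv : inner)
            std::cout << kv.first << " -> " << kv.second << '\n';
}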
Related
I wanted to implement something that maps an unordered set of integers to an integer value; some kind of C++ equivalent of a Python dict that has sets as keys and ints as values.
So far I used std::map<std::set<int>, int> set_lookup; but from what I understood this is unnecessarily slow as it uses trees. I don't care about the ordering, only speed is important.
From what I understand, the desired structure is std::unordered_map<std::unordered_set<int>, int, hash> set_lookup; which needs a hash function to work.
Is this the right approach? And what would a minimal running example look like? I couldn't find out what the hash part should look like.
It isn't clear whether you ask about the syntax for defining a hash function, or about how to define a mathematically good hash for a set of ints.
Anyway - in case it is the former, here is how you should technically define a hash function for your case:
#include <cstddef>
#include <unordered_map>
#include <unordered_set>

namespace std
{
    template <>
    struct hash<std::unordered_set<int>>
    {
        std::size_t operator()(const std::unordered_set<int>& k) const
        {
            // ...
            // Here you should create and return a meaningful hash value:
            return 5;
        }
    };
}

int main()
{
    std::unordered_map<std::unordered_set<int>, int> m;
}
Having written that, I join the other comments questioning whether this is a good direction for solving your problem.
You haven't described your problem, so I cannot answer that.
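For a hedged idea of what a meaningful hash body could look like for this key type: since an unordered_set has no guaranteed iteration order, the per-element hashes have to be combined with an order-independent (commutative) operation such as XOR or addition. Here is a sketch that uses a named functor passed as the map's third template argument instead of specializing std::hash (the functor name is illustrative):

#include <cstddef>
#include <functional>
#include <unordered_map>
#include <unordered_set>

// Illustrative order-independent hash: XOR is commutative, so the result
// does not depend on the order in which the set's elements are visited.
struct UnorderedSetHash
{
    std::size_t operator()(const std::unordered_set<int>& k) const
    {
        std::size_t h = 0;
        for (int x : k)
            h ^= std::hash<int>{}(x);
        return h;
    }
};

int main()
{
    std::unordered_map<std::unordered_set<int>, int, UnorderedSetHash> m;
    m[std::unordered_set<int>{1, 2, 3}] = 42; // key comparison uses unordered_set's operator==
}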
I understood [std::map<std::set<int>, int> set_lookup;] is unnecessarily slow as it uses trees.
Is [std::unordered_map<std::unordered_set<int>, int, hash>] the right approach?
It depends. If your keys are created then not changed, and you want to be able to do a lot of lookups very fast, then a hash-table based approach would indeed be good, but you'll need two things for that:
to be able to hash keys
to be able to compare keys
To hash keys: deciding on a good hash function is a bit of an art form. A rarely bad - but sometimes slower than necessary - approach is to use boost hash_combine (which is short enough that you can copy it into your code - see here for the implementation). If your integer values are already quite random across most of their bits, though, simply XORing them together would produce a great hash. If you're not sure, use hash_combine or a better hash (e.g. a 32-bit MurmurHash). The time taken to hash depends on the time to traverse the key, and traversing an unordered_set typically involves following a linked list (which tends to jump around in memory and is CPU-cache unfriendly). The best way to store the values for fast traversal is in contiguous memory - i.e. a std::vector<>, or a std::array<> if the size is known at compile time.
The other thing you need to do is compare keys for equality: that also works fastest when elements in the key are contiguous in memory, and consistently ordered. Again, a sorted std::vector<> or std::array<> would be best.
That said, if the sets for your keys are large, and you can compromise on a statistical guarantee of key equality, you could use e.g. a 256-bit hash and write your code as if hash collisions always correspond to key equality. That's often not an acceptable risk, but with a collision-resistant 256-bit hash a CPU could run flat out for millennia hashing distinct keys and still be unlikely to produce the same hash even once, so it is an approach I've seen even financial firms use in their core in-house database products, as it can save so much time.
If you're tempted by that compromise, you'd want std::unordered_map<HashValue256, std::pair<int, std::vector<int>>>. To find the int associated with a set of integers, you'd hash them first, then do a lookup. It's easy to write a hash function that produces the same output for a set or a sorted vector<> or array<>, as you can present the elements to something like hash_combine in the same sorted order during traversal (i.e. just size_t seed = 0; for (auto& element : any_sorted_container) hash_combine(seed, element);). Storing the vector<int> means you can traverse the unordered_map later if you want to recover all the key "sets". If you don't need to do that (e.g. you're only ever looking up the ints by keys known to the code at the time, and you're comfortable with the statistical improbability of a good hash colliding), you don't even need to store the keys/vectors: std::unordered_map<HashValue256, int> is enough.
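As a hedged sketch of the first part of that idea (the full 256-bit variant would need a wider hash such as SHA-256), here is how a sorted std::vector<int> key could be hashed with a hash_combine written in the style of boost::hash_combine and used in an unordered_map; the functor name is illustrative:

#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

// hash_combine in the style of boost::hash_combine.
inline void hash_combine(std::size_t& seed, int value)
{
    seed ^= std::hash<int>{}(value) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

// Hashes the elements in their stored (sorted) order, so equal sets
// represented as sorted vectors always hash identically.
struct SortedVectorHash
{
    std::size_t operator()(const std::vector<int>& key) const
    {
        std::size_t seed = 0;
        for (int element : key)
            hash_combine(seed, element);
        return seed;
    }
};

int main()
{
    // Keys are sets represented as sorted std::vector<int>;
    // key equality uses vector's operator==.
    std::unordered_map<std::vector<int>, int, SortedVectorHash> set_lookup;
    std::vector<int> key = {1, 2, 3}; // must already be sorted
    set_lookup[key] = 42;
}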
I want to keep a data structure for storing all the elements that I have seen till now. Considering that keeping an array for this is out of the question, as element values can be of the order of 10^9, which data structure should I use to achieve this: unordered_map or unordered_set in C++?
Maximum elements that will be visited in worst case : 10^5
-10^9 <= element <= 10^9
As #MikeCAT said in the comments, a map would only make sense if you wanted to store additional information about the element or the visitation. But if you wanted only to store the truth value of whether the element has been visited or not, the map would look something like this:
// if your elements were strings
std::unordered_map<std::string, bool> isVisited;
and then this would just be a waste of space. Storing the truth value is redundant, if the mere presence of the string within the map already indicates that it has been visited. Let's see a comparison:
std::unordered_map<std::string, bool> isVisitedMap;
std::unordered_set<std::string> isVisitedSet;
// Visit some places
isVisitedMap["madrid"] = true;
isVisitedMap["london"] = true;
isVisitedSet.insert("madrid");
isVisitedSet.insert("london");
// Maybe the information expires so you want to remove them
isVisitedMap["london"] = false;
isVisitedSet.erase("london");
Now the elements stored in each structure will be:
For the map:
{{"london", false}, {"madrid", true}} <--- 4 elements
{"madrid"} <--- 1 element. Much better
In a project in which I had a binary tree converted to a binary DAG for optimization purposes (GRAPHGEN) I passed the exploration function a map from node pointers to bool:
std::map<BinaryDrag<conact>::node*, bool> &visited_fl
The map kept track of the pointers in order not to go through the same nodes again when doing multiple passes.
You could use a std::unordered_map<Value, bool>.
I want to keep a data structure for storing all the elements that I have seen till now.
A way to re-phrase that is to say "I want a data structure to store the set of all elements that I've seen till now". The clue is in the name. Without more information, std::unordered_set seems like a reasonable choice to represent a set.
That said, in practice it depends on details like what you're planning to do with this set. Array can be a good choice as well (yes, even for billions of elements), other set implementations may be better and maps can be useful in some use cases.
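A minimal sketch of that choice for the numbers given in the question (at most 10^5 elements, each between -10^9 and 10^9, which fits in a 32-bit int):

#include <iostream>
#include <unordered_set>

int main()
{
    std::unordered_set<int> seen; // the -10^9..10^9 range fits in an int

    // insert() returns an {iterator, bool} pair; the bool is false if the
    // element was already present, so insertion and the "seen before?"
    // check happen in one step.
    for (int x : {5, -1000000000, 42, 5})
    {
        bool first_time = seen.insert(x).second;
        std::cout << x << (first_time ? " is new\n" : " was already seen\n");
    }

    // Membership test without inserting:
    std::cout << seen.count(42) << '\n'; // prints 1
}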
How does C++ STL unordered_map resolve collisions?
Looking at http://www.cplusplus.com/reference/unordered_map/unordered_map/, it says: "Unique keys: No two elements in the container can have equivalent keys."
That should mean that the container is indeed resolving collisions. However, that page does not tell me how it is doing it. I know some ways to resolve collisions like using linked lists and/or probing. What I want to know is how the c++ STL unordered_map is resolving it.
The standard defines a little more about this than most people seem to realize.
Specifically, the standard requires (§23.2.5/9):
The elements of an unordered associative container are organized into buckets. Keys with the same hash code appear in the same bucket.
The interface includes a bucket_count that runs in constant time (Table 103). It also includes a bucket_size that has to run in time linear in the size of the bucket.
That's basically describing an implementation that uses collision chaining. When you do use collision chaining, meeting all the requirements is somewhere between easy and trivial. bucket_count() is the number of elements in your array. bucket_size() is the number of elements in the collision chain. Getting them in constant and linear time respectively is simple and straightforward.
By contrast, if you use something like linear probing or double hashing, those requirements become all but impossible to meet. Specifically, all the items that hashed to a specific value need to land in the same bucket, and you need to be able to count those buckets in constant time.
But, if you use something like linear probing or double hashing, finding all the items that hashed to the same value means you need to hash the value, then walk through the "chain" of non-empty items in your table to find how many of those hashed to the same value. That's not linear on the number of items that hashed to the same value though--it's linear on the number of items that hashed to the same or a colliding value.
With enough extra work and a fair amount of stretching the meaning of some of the requirements almost to the breaking point, it might be barely possible to create a hash table using something other than collision chaining, and still at least sort of meet the requirements--but I'm not really certain it's possible, and it would certainly involve quite a lot of extra work.
Summary: all practical implementations of std::unordered_set (or unordered_map) undoubtedly use collision chaining. While it might be (just barely) possible to meet the requirements using linear probing or double hashing, such an implementation seems to lose a great deal and gain nearly nothing in return.
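A small sketch of the bucket interface described above, which makes the chaining visible (bucket numbers and chain lengths will vary between implementations):

#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> m = {
        {"madrid", 1}, {"london", 2}, {"paris", 3}
    };

    std::cout << "bucket_count: " << m.bucket_count() << '\n'; // constant time

    for (const auto& kv : m)
    {
        std::size_t b = m.bucket(kv.first); // which bucket this key landed in
        // bucket_size(b) is linear in the length of that bucket's chain
        std::cout << kv.first << " -> bucket " << b
                  << " (chain length " << m.bucket_size(b) << ")\n";
    }
}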
I found this answer while looking for how to detect when my types are colliding, so I will post this in case that is the intent of the question:
I believe there's some misconception about "Unique keys: No two elements in the container can have equivalent keys."
Look at the code below:
//pseudocode
std::unordered_map<int, char> hashmap;
hashmap[5] = 'a';
hashmap[5] = 'b'; //replace 'a' with 'b', there is no collision being handled.
I think Jerry's answer is referring to the internal system that unordered_map uses to map hashed keys to appropriate array indices.
If you want collisions to be handled for your types (with buckets), you need a std::unordered_multimap and will have to iterate over the matching range yourself.
Hopefully this code can be read without the context I generated it with. It basically checks whether any element in the bucket associated with the hash is the element I'm looking for.
//sp is std::shared_ptr
//memo is std::unordered_multimap< int, sp<AStarNode> >
//there's probably multiple issues with this code in terms of good design (like using int keys rather than unsigned)
bool AStar_Incremental::hasNodeBeenVisited(sp<AStarNode> node)
{
    using UMIter = std::unordered_multimap<int, sp<AStarNode> >::iterator;

    bool bAlreadyVisited = false;

    // Get all values for the key in O(1*)
    int hash = WorldGrid::hashGrid(node->location);
    std::pair<UMIter, UMIter> start_end = memo.equal_range(hash); // bucket range
    UMIter start = start_end.first;
    UMIter end = start_end.second;

    // Hopefully this is implemented to be O(m) where m is the bucket size.
    for (UMIter bucketIter = start; bucketIter != end; ++bucketIter)
    {
        sp<AStarNode> previousNode = bucketIter->second;
        sf::Vector2i& previousVisit = previousNode->location;
        if (previousVisit == node->location)
        {
            bAlreadyVisited = true;
            break;
        }
    }

    return bAlreadyVisited;
}
I have settings which are stored in a std::map. For example, there is a WorldTime key whose value is updated on each iteration of the main cycle. I don't want to look it up in the map every time I need it (it's also processed each frame); I don't think that's fast at all. So, can I get a pointer to the map's value and access it through that? The code is:
std::map<std::string, int> mSettings;
// Somewhere in cycle:
mSettings["WorldTime"] += 10; // ms
// Somewhere in another place, also called in cycle
DrawText(mSettings["WorldTime"]); // Is slow to call each frame
So the idea is something like:
int *time = &mSettings["WorldTime"];
// In cycle:
DrawText(*time);
How wrong is it? Should I do something like that?
Best use a reference:
int & time = mSettings["WorldTime"];
If the key doesn't already exist, the []-access will create the element (and value-initialize the mapped value, i.e. 0 for an int). Alternatively (if the key already exists):
int & time = mSettings.find("WorldTime")->second;
As an aside: if you have hundreds of thousands of string keys or use lookup by string key a lot, you might find that an std::unordered_map<std::string, int> gives better results (but always profile before deciding). The two maps have virtually identical interfaces for your purpose.
According to this answer on StackOverflow, it's perfectly OK to store a pointer to a map element as it will not be invalidated until you delete the element (see note 3).
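A short sketch of the reference approach, showing that the reference keeps working after other keys are inserted (map elements are not relocated by insertions of other keys); DrawText from the question is replaced by std::cout here:

#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> mSettings;
    int& worldTime = mSettings["WorldTime"];   // creates the entry, value-initialized to 0

    worldTime += 10;                  // same effect as mSettings["WorldTime"] += 10;
    mSettings["OtherSetting"] = 1;    // inserting elsewhere does not invalidate the reference

    std::cout << mSettings["WorldTime"] << '\n'; // prints 10
    std::cout << worldTime << '\n';              // prints 10
}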
If you're worried so much about performance then why are you using strings for keys? What if you had an enum? Like this:
enum Settings
{
WorldTime,
...
};
Then your map would be using ints for keys rather than strings. std::map has to do comparisons between keys because it is typically implemented as a balanced tree, and comparisons between ints are much faster than comparisons between strings.
Furthermore, if you're using an enum for keys, you can just use an array, because an enum is essentially a map from some sort of symbol (i.e. WorldTime) to an integer, starting at zero. So then do this:
enum Settings
{
WorldTime,
...
NumSettings
};
And then declare your mSettings as an array:
int mSettings[NumSettings];
This has a faster lookup time than a std::map. Then reference it like this:
DrawText(mSettings[WorldTime]);
Since you're basically just accessing a value in an array rather than accessing a map this is going to be a lot faster and you don't have to worry about the pointer/reference hack you were trying to do in the first place.
Let's say we have read these values:
3
1241
124515
5322353
341
43262267234
1241
1241
3213131
And I have an array like this (with the elements above):
a[0]=1241
a[1]=124515
a[2]=43262267234
a[3]=3
...
The thing is that the elements' order in the array is not constant (I have to change it somewhere else in my program).
How can I know at which positions an element appears in the read document?
Note that I can not do:
vector <int> a[1000000000000];
a[number].push_back(all_positions);
Because a would be too large (there's a memory restriction). (Let's say I have only 3000 elements, but their values range from 0 to 2^32.)
So, in the example above, I would want to know all the positions at which 1241 appears without iterating again through all the read elements.
In other words, how can I associate with the number "1241" the positions "1, 6, 7" so I can simply access them in O(1) (or rather O(k), where k is the number of positions at which the element appears)?
If there's no O(1) solution, I want to know what the optimal one is...
I don't know if I've made myself clear. If not, just say so and I'll update my question :)
You need to use some sort of dynamic array, like a vector (std::vector) or other similar containers (std::list, maybe, it depends on your needs).
Such data structures are safer and easier to use than C-style array, since they take care of memory management.
If you also need to look for an element in O(1) you should consider using some structures that will associate both an index to an item and an item to an index. I don't think STL provides any, but boost should have something like that.
If O(log n) is a cost you can afford, also consider std::map
You can use what is commonly referred to as a multimap. That is, it stores a key together with multiple values. Lookup is O(log n).
If you're working with Visual Studio, it provides its own hash_multimap; otherwise, may I suggest using boost::unordered_map with a list as your value?
You don't need a sparse array of 1000000000000 elements; use an std::map to map positions to values.
If you want bi-directional lookup (that is, you sometimes want "what are the indexes for this value?" and sometimes "what value is at this index?") then you can use a boost::bimap.
Things get further complicated as you have values appearing more than once. You can sacrifice the bi-directional lookup and use a std::multimap.
You could use a map for that. Like:
std::map<int, std::vector<int>> MyMap;
So every time you encounter a value while reading the file, you append its position to the map. Say X is the value you read and Y is the position; then you just do
MyMap[X].push_back( Y );
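A hedged sketch of the whole idea, reading the values from standard input and recording each 0-based position as it is read (long long is used because the sample values exceed the range of int; variable names are illustrative):

#include <iostream>
#include <map>
#include <vector>

int main()
{
    std::map<long long, std::vector<int>> MyMap; // value -> positions it appeared at

    long long value;
    int position = 0;
    while (std::cin >> value)
    {
        MyMap[value].push_back(position); // creates the vector on first occurrence
        ++position;
    }

    // All positions of 1241, if it was read at all:
    for (int p : MyMap[1241])
        std::cout << p << ' ';
    std::cout << '\n';
}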
Instead of your array, use
std::map<int, std::vector<int> > a;
You need an associative collection, but you might want to associate multiple values with each key.
You can use std::multimap< int, int >
or
you can use std::map< int, std::set< int > >
I have found in practice the latter is easier for removing items if you just need to remove one element. It is unique on key-value combinations but not on key or value alone.
If you need higher performance then you may wish to use a hash_map instead of a map. For the inner collection, though, you will not get much performance benefit from using a hash, as you will have very few duplicates, and a std::set is fine.
There are many implementations of hash_map, and there is one in the new standard (std::unordered_map). If you don't have the new standard, go for Boost.
It seems you need a std::map<int, int>. You can store mappings such as 1241->0, 124515->1, etc. Then perform a lookup on this map to get the array index.
Besides the std::map solution offered by others here (O(log n)), there's the approach of a hash map (implemented as boost::unordered_map or std::unordered_map in C++0x, supported by modern compilers).
It would give you O(1) lookup on average, which often is faster than a tree-based std::map. Try for yourself.
You can use a std::multimap to store both a key (e.g. 1241) and multiple values (e.g. 1, 6 and 7).
An insert has logarithmic complexity, but you can speed it up if you give the insert method a hint where it can insert the item.
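A brief sketch of that approach, using a hinted insert and equal_range to retrieve all positions stored under a key (the hint only helps when the new element really belongs next to it):

#include <iostream>
#include <map>

int main()
{
    std::multimap<long long, int> positions; // one entry per (value, position) pair

    positions.insert({1241, 1});
    positions.insert({1241, 6});
    // Hinted insert: can be amortized O(1) when the element belongs right before the hint.
    positions.insert(positions.end(), {1241, 7});

    // equal_range returns the [first, last) range of entries with key 1241.
    auto range = positions.equal_range(1241);
    for (auto it = range.first; it != range.second; ++it)
        std::cout << it->second << ' ';
    std::cout << '\n'; // prints: 1 6 7 (insertion order of equal keys is preserved since C++11)
}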
For O(1) lookup you could hash the number to find its entry (key) in a hash map (boost::unordered_map, dictionary, stdex::hash_map etc)
The value could be a vector of the indices where the number occurs, or a 3000-bit array (375 bytes) in which the bit for each index where the number (key) occurs is set.
boost::unordered_map<unsigned long, std::vector<unsigned long>> myMap;
for(unsigned long i = 0; i < sizeof(a)/sizeof(*a); ++i)
{
myMap[a[i]].push_back(i);
}
Instead of storing an array of integers, you could store an array of structures, each containing the integer value and all of its positions in an array or vector.
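A minimal sketch of that layout (the struct and field names are illustrative):

#include <vector>

// One entry per distinct value: the value itself plus every position it was read at.
struct Entry
{
    long long value;              // the sample values exceed the range of a 32-bit int
    std::vector<int> positions;
};

int main()
{
    std::vector<Entry> entries;
    entries.push_back({1241, {1, 6, 7}});

    // Note: finding a value still needs a linear scan (or a binary search if the
    // entries are kept sorted by value), which is why the map-based answers above
    // are usually preferable when lookup speed matters.
}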