hash table vs. linear list

Will there be an instance where a search for a keyword in a linear list is quicker than in a hash table?
Basically, I'd like to know if there is a fringe case where searching for a keyword in a linear list is faster than a hash table search.
Thanks!

Searching in a hash table is not always constant-time in reality. If the hash function is a poor match for the data, you can have a lot of collisions, and in the extreme case where every data item has the same hash value, the result looks much like linear search. Depending on the details, this effective linear search can be slower than a linear search over the data in an array. (E.g. open addressing with a quadratic probing sequence, which makes poor use of the processor caches, might well be slower than a linear search over an array.)
Here's an example of a real-world case where all keys ended up in the same bucket: Java bug 4669519.
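For illustration, here's a minimal sketch of that degenerate case (the key set and the constant hash functor are made up for the example): with a hash that returns the same value for every key, all elements land in one bucket and find() degenerates into a linear walk.

#include <cstddef>
#include <string>
#include <unordered_set>

// Deliberately terrible hash: every key maps to the same bucket.
struct ConstantHash {
    std::size_t operator()(const std::string&) const { return 0; }
};

int main() {
    std::unordered_set<std::string, ConstantHash> words;
    words.insert("alpha");
    words.insert("beta");
    words.insert("gamma");
    // Every lookup now walks one long chain: effectively a linear search,
    // but with extra pointer chasing compared to scanning an array.
    return words.find("gamma") != words.end() ? 0 : 1;
}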

Yes, in the case of a very small number of elements. Think about how a hash table works: it has to compute the hash to find a bucket, then search through the list in that bucket. Plus it could be a complex multi-level hash, etc. So the break-even point is roughly where scanning a linear list becomes more work than the hash lookup.
Another instance would be if the element you are looking for is always at or near the beginning of the list. Depending on what you are doing, that can happen.
There are others, but that should help you think about it.
Still, don't get confused. The hash is usually what you want.
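As a rough sketch of the small-N case (types and sizes here are arbitrary), compare what each lookup actually has to do:

#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// For a handful of keys, scanning a contiguous vector is often faster than
// hashing: no hash computation, no bucket indirection, and the whole array
// may fit in a cache line or two.
bool in_vector(const std::vector<std::string>& v, const std::string& key) {
    return std::find(v.begin(), v.end(), key) != v.end();
}

// Amortised O(1), but with a constant cost (hashing, bucket lookup) on every call.
bool in_hash_set(const std::unordered_set<std::string>& s, const std::string& key) {
    return s.find(key) != s.end();
}

Where the crossover sits depends on the key type and the hash, so measure with your own data.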

Related

Storing filepath and size in C++

I'm processing a large number of image files (tens of millions) and I need to return the number of pixels for each file.
I have a function that uses an std::map<string, unsigned int> to keep track of files already processed. If a path is found in the map, then the value is returned, otherwise the file is processed and inserted into the map. I do not delete entries from the map.
The problem is that as the number of entries grows, the lookup time is killing performance. This portion of my application is single-threaded.
I wanted to know whether unordered_map is the solution to this, or whether using std::string as keys is going to affect the hashing and require too many rehashes as the number of keys increases, once again killing performance.
One other item to note is that the paths for the string are expected (but not guaranteed) to have the same prefix, for example: /common/until/here/now_different/. So all strings will likely have the same first N characters. I could potentially store these as relative to the common directory. How likely is that to help performance?
unordered_map will probably be better in this case. It will typically be implemented as a hash table, with amortized O(1) lookup time, while map is usually a binary tree with O(log n) lookups. It doesn't sound like your application would care about the order of items in the map, it's just a simple lookup table.
In both cases, removing the common prefix should be helpful, as it means less time spent needlessly iterating over that part of the strings. With unordered_map each key string is traversed twice: once to hash it and once to compare it against the keys in the table. Some hash functions also limit the amount of a string they hash, to prevent O(n) hash performance -- if the common prefix is longer than this limit, you'll end up with a worst-case hash table (everything is in one bucket).
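A minimal sketch of that combination - unordered_map plus stripping the shared prefix - where the prefix constant and process_file are assumptions standing in for your real code:

#include <string>
#include <unordered_map>

// Hypothetical: decodes the image and returns its pixel count.
unsigned int process_file(const std::string& path);

// Assumed common prefix; adjust to whatever your paths actually share.
static const std::string kCommonPrefix = "/common/until/here/";
static std::unordered_map<std::string, unsigned int> pixel_cache;

unsigned int pixels_for(const std::string& path) {
    // Key the cache on the path relative to the prefix, so hashing and
    // comparison skip the characters every key has in common.
    std::string key =
        path.compare(0, kCommonPrefix.size(), kCommonPrefix) == 0
            ? path.substr(kCommonPrefix.size())
            : path;
    auto it = pixel_cache.find(key);
    if (it != pixel_cache.end())
        return it->second;                    // already processed
    unsigned int px = process_file(path);
    pixel_cache.emplace(std::move(key), px);
    return px;
}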
I really like Galik's suggestion of using inodes if you can, but if not...
I'll emphasise a point already made in the comments: if you have reason to care, implement the alternatives and measure. The more reason, the more effort it's worth expending on that.
So, another option is to use a 128-bit cryptographic-strength hash function on your filepaths, then trust that statistically it's extremely unlikely to produce a collision. A rule of thumb is that if you have 2^n distinct keys, you want significantly more than a 2n-bit hash. For ~100m keys, n is about 27, so that's roughly 54 bits; you could probably get away with a 64-bit hash, but it leaves little headroom if the number of images grows over the years. Then use a vector to back a hash table of just the hashes and file sizes, with say quadratic probing. Your caller would ideally pre-calculate the hash of an incoming file path in a different thread, passing your lookup API only the hash.
The above avoids the dynamic memory allocation, indirection, and of course memory usage when storing variable-length strings in the hash table and utilises the cache much better. Relying on hashes not colliding may make you uncomfortable, but past a point the odds of a meteor destroying the computer, or lightning frying it, will be higher than the odds of a collision in the hash space (i.e. before mapping to hash table bucket), so there's really no point fixating on that. Cryptographic hashing is relatively slow, hence the suggestion to let clients do it in other threads.
(I have worked with a proprietary distributed database based on exactly this principle for path-like keys.)
Aside: beware Visual C++'s string hashing - they pick 10 characters spaced along your string to incorporate in the hash value, which would be extremely collision prone for you, especially if several of those were taken from the common prefix. The C++ Standard leaves implementations the freedom to provide whatever hashes they like, so re-measure such things if you ever need to port your system.
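To make the hashes-only layout concrete, here's a rough sketch. A 64-bit FNV-1a stands in for the stronger hash recommended above, the table size is fixed, and resizing/deletion are omitted - treat it as an outline, not a drop-in implementation.

#include <cstdint>
#include <string>
#include <vector>

// Stand-in hash (FNV-1a, 64-bit); the suggestion above is a 128-bit
// cryptographic hash computed by the caller on another thread instead.
std::uint64_t fnv1a64(const std::string& s) {
    std::uint64_t h = 14695981039346656037ULL;
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ULL; }
    return h;
}

// Open-addressed table storing only (hash, pixel count) pairs - no strings,
// no per-node allocations, contiguous memory that the cache likes.
struct HashOnlyTable {
    struct Slot { std::uint64_t hash = 0; unsigned int pixels = 0; bool used = false; };
    std::vector<Slot> slots;
    explicit HashOnlyTable(std::size_t capacity) : slots(capacity) {}

    unsigned int* find(std::uint64_t h) {
        for (std::size_t i = 0; i < slots.size(); ++i) {
            Slot& s = slots[(h + i * i) % slots.size()];   // quadratic probing
            if (!s.used) return nullptr;
            if (s.hash == h) return &s.pixels;
        }
        return nullptr;
    }

    void insert(std::uint64_t h, unsigned int pixels) {
        for (std::size_t i = 0; i < slots.size(); ++i) {
            Slot& s = slots[(h + i * i) % slots.size()];
            if (!s.used || s.hash == h) { s.hash = h; s.pixels = pixels; s.used = true; return; }
        }
        // Table full: a real implementation would grow and rehash here.
    }
};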

Tree or other data structure most efficient to lookup "recent searches"

I thought there was a tree algorithm for what I'm looking for, but I forgot its name and Googling didn't help.
I'm searching for an algorithm that has the very best lookup performance for data with these characteristics:
- Each lookup is expected to be a hit. So all keys which are looked up exist (there may be some misses, but these will be treated as a "misconfiguration", and the occurrence of such misses is negligible)
- It is very likely (the data set is optimized for this) that the same lookups occur in runs - e.g. there are likely to be a million lookups for key 123, perhaps a single lookup for key 456 in between, and then again millions of lookups for 123. Later, another group of likely-identical keys is looked up, and so on
Sure, I could use a hash algorithm. But for the given purpose I remember there was a search-optimized tree that keeps the most recently looked-up keys at the very top of the tree, so the first node could potentially be a direct hit, O(1), without needing a hash function or a modulo over a hash store.
I'm seeking this algorithm to get raw performance for graphics rendering on mobile devices.
Perhaps a splay tree.
A splay tree is a self-adjusting binary search tree with the additional property that recently accessed elements are quick to access again.
But a hash table has expected O(1) lookups too, so you shouldn't expect one to clearly outperform the other.
I would suggest using a hash table for the job. To speed up subsequent searches, you can cache the K most recently accessed distinct elements in an array. If K is small (< 20 or so), linear search in that array will be very fast, because it can stay in the L1 cache.
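A sketch of that arrangement (key/value types and K are placeholders): a tiny most-recently-used array is checked first, and the hash table is only consulted on a miss.

#include <cstddef>
#include <unordered_map>
#include <utility>
#include <vector>

struct CachedLookup {
    static constexpr std::size_t K = 8;             // assumed cache size, keep it small
    std::vector<std::pair<int, int>> recent;        // (key, value), newest first
    std::unordered_map<int, int> table;

    int* find(int key) {
        // Fast path: tiny linear scan that stays in L1 cache.
        for (auto& entry : recent)
            if (entry.first == key) return &entry.second;
        // Slow path: regular hash lookup, then promote into the cache.
        auto it = table.find(key);
        if (it == table.end()) return nullptr;
        recent.insert(recent.begin(), {it->first, it->second});
        if (recent.size() > K) recent.pop_back();
        return &recent.front().second;
    }
};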

Fastest way to search for a string

I have 300 strings to store and search for, and most of them are identical in terms of characters and length. For example, I have strings "ABC1", "ABC2", "ABC3" and so on, and another set like sample1, sample2, sample3. So I am confused about how to store them: should I use an array or a hash table? My main concern is the time it takes to search for a string when I need to get one out of storage. If I use an array, I will have to do a string compare at each index to find the one I want. If I implement a hash table, I will have to take care of collisions (obviously) and implement chaining for storing strings that collide.
So I am looking for suggestions weighing the pros and cons of each approach to arrive at the best practice.
Because the keys are short and tend to have a common prefix, you should consider radix data structures such as the Patricia trie and the ternary search tree (Google these; you'll find lots of examples). Search time in these structures tends to be O(1) with respect to the number of entries and O(n) with respect to the length of the keys. Beware, however, that long strings can use lots of memory.
Search time is similar to hash maps if you don't consider collision resolution which is not a problem in a radix search. Note that I am considering the time to compute the hash as part of the cost of a hash map. People tend to forget it.
One downside is that radix structures are not cache-friendly if your keys tend to show up in random order. As someone mentioned, if the search time is really important: measure the performance of some alternative approaches.
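If you want to measure rather than guess, a minimal timing sketch like the one below (the key set and the structures you plug in are up to you) is usually enough to separate the candidates:

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Times repeated lookups against one candidate structure; run it once per
// alternative (array + linear scan, std::map, hash table, trie, ...) using
// realistic keys in a realistic access order.
template <class Lookup>
double time_lookups(const std::vector<std::string>& keys, Lookup lookup) {
    auto start = std::chrono::steady_clock::now();
    std::size_t hits = 0;
    for (int round = 0; round < 1000; ++round)
        for (const auto& k : keys)
            hits += lookup(k) ? 1 : 0;
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::printf("hits = %zu\n", hits);   // keeps the loop from being optimised away
    return elapsed.count();
}

// Example: time_lookups(keys, [&](const std::string& k) { return table.count(k) > 0; });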
This depends on how much your data changes. By that I mean: if you have 300 index strings that each reference another string, how often do those 300 index strings change?
You can use a std::map for quick lookups, but the map will require more resources when it is created the first time (compared to an array, vector or list).
I use maps mostly for some kind of dynamic lookup tables (for example: ip to socket).
So in your case it will look like this:
std::map<std::string, std::string> my_map;
my_map["ABC1"] = "sample1";
my_map["ABC2"] = "sample2";
std::string looked_up = my_map["ABC1"];

How can we benefit from vs2010 hash_map's less?

See this if you don't know that VS2010's hash_map actually requires a total ordering, and hence a user-defined less.
One of the answers said this makes binary search possible, but I don't think so, because:
The hash function should be uniform, and it is better that the load factor be less than 1; that means, in most cases, one element per hash slot, i.e. no need for binary search.
Obviously, it will slow down insertion because of locating the appropriate position.
How does hash-map benefit from this design? and how do we utilize this design?
thanks
The hash function should be uniform, and it is better that the load factor be less than 1; that means, in most cases, one element per hash slot, i.e. no need for binary search.
You can't count on at most one element per hash slot; some buckets will have to hold more than one key. Unless the input comes from a pre-determined restricted set of values (i.e. perfect hashing), the hash function has to deal with more inputs than the outputs it can produce. There will be collisions; this is unavoidable in an implementation as generic as this one. However, a good hash function produces well-distributed hashes and keeps the number of elements per hash slot low.
Obviously, it will slow down insertion because of locating the appropriate position.
Assuming a good hash function and non-degenerate input (input designed so that many elements result in the same hash), there will always be only a few keys per bucket. Inserting into such a binary search tree won't be that big of a cost, and that little cost may bring benefits elsewhere (searches may be faster than on an implementation with a linked list). And in case of degenerate input, the hash map will degenerate into a binary search tree, which is much better than a simple linked list.
Your question is largely irrelevant in practice, because C++ now supplies unordered_map etc. which use an Equal predicate rather than a less-than comparator.
However, consider a hash_map<string, ...>. Clearly, the value space of string is larger than that of size_t, so for any hash function there will be values that have the same hash and so are placed in the same bucket. In the pathological situation where all the items in the hash table are placed in the same bucket, exploiting ordering among keys will result in improved speed of access, insertion and removal.
Note that search on an ordered list (or binary tree) is O(log n) as opposed to O(n).
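For comparison, this is roughly what the standard unordered containers ask of a user-defined key: a hash functor and an equality predicate, but no less-than (the key type here is made up for the example).

#include <cstddef>
#include <string>
#include <unordered_map>

struct FileKey {
    std::string path;
};

// Hash and equality are all unordered_map needs; no operator< is required,
// unlike VS2010's hash_map with its ordering-based comparator.
struct FileKeyHash {
    std::size_t operator()(const FileKey& k) const { return std::hash<std::string>()(k.path); }
};
struct FileKeyEqual {
    bool operator()(const FileKey& a, const FileKey& b) const { return a.path == b.path; }
};

int main() {
    std::unordered_map<FileKey, int, FileKeyHash, FileKeyEqual> sizes;
    sizes[FileKey{"a.png"}] = 42;
    return sizes.count(FileKey{"a.png"}) == 1 ? 0 : 1;
}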

data structure for storing array of strings in a memory

I'm considering a data structure for storing a large array of strings in memory. The strings will be inserted at the beginning of the program and will not be added or deleted while the program is running. The crucial point is that the search procedure should be as fast as possible. Saving memory is not important. I'm inclined towards hash_set from the standard library, which allows searching for elements in roughly constant time, but it's not guaranteed that this time will be short. Can anyone suggest a better standard solution?
Many thanks!
Try a Prefix Tree
A Trie is better than a Binary Search Tree for searching elements. For a comparison against a hash table, see this question.
If lookup time really is the only important thing, then at startup time, once you have all the strings, you could compute a perfect hash over them, and use this as the hashing function for a hashtable.
The problem is how you'd execute the hash - any kind of byte-code-based computation is probably going to be slower than using a fixed hash and dealing with collisions. But if all you care about is lookup speed, then you can require that your process has the necessary privileges to load and execute code. Write the code for the perfect hash, run it through a compiler, load it. Test at runtime whether it's actually faster for these strings than your best known data-agnostic structure (which might be a Trie, a hashtable, a Judy array or a splay tree, depending on implementation details and your typical access patterns), and if not fall back to that. Slow setup, fast lookup.
It's almost never truly the case that speed is the only crucial point.
There is e.g. google-sparsehash.
It includes a dense hash set/map (re)implementation that may perform better than the standard library hash set/map.
See performance. Make sure that you are using a good hash function. (My subjective vote: murmur2.)
Strings will be inserted at the beginning of the program and will not be added or deleted while the program is running.
If the strings are immutable - so insertion/deletion is "infrequent", so to speak - another option is to build a Directed Acyclic Word Graph or a Compact Directed Acyclic Word Graph, which might* be faster than a hash table and has a better worst-case guarantee.
*Standard disclaimer applies: depending on the use case, implementations, data set, phase of the moon, etc. Theoretical expectations may differ from observed results because of factors not accounted for (e.g. cache and memory latency, time complexity of certain machine instructions, etc.).
A hash_set with a suitable number of buckets would be ideal; alternatively, a vector with the strings in dictionary order, searched using binary search, would be great too.
The two standard data structures for fast string lookup are hash tables and tries, particularly Patricia tries. A good hash implementation and a good trie implementation should give similar performance, as long as the hash implementation is good enough to limit the number of collisions. Since you never modify the set of strings, you could try to build a perfect hash. If performance is more important than development time, try all solutions and benchmark them.
A complementary technique that could save lookups in the string table is to use atoms: each time you read a string that you know you're going to look up in the table, look it up immediately, and store a pointer to it (or an index in the data structure) instead of storing the string. That way, testing the equality of two strings is a simple pointer or integer equality (and you also save memory by storing each string once).
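A small sketch of the atom idea (class and method names are just illustrative): intern each string once, hand out an integer identifier, and compare identifiers from then on.

#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Each distinct string is stored exactly once and identified by an integer
// "atom"; comparing two atoms is a plain integer comparison.
class AtomTable {
public:
    using Atom = std::size_t;

    Atom intern(const std::string& s) {
        auto it = index_.find(s);
        if (it != index_.end()) return it->second;   // already interned
        strings_.push_back(s);
        Atom a = strings_.size() - 1;
        index_.emplace(s, a);
        return a;
    }

    const std::string& text(Atom a) const { return strings_[a]; }

private:
    std::vector<std::string> strings_;               // atom -> string
    std::unordered_map<std::string, Atom> index_;    // string -> atom
};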
Your best bet would be as follows:
Building your structure:
Insert all your strings (char*s) into an array.
Sort the array lexicographically.
Lookup
Use a binary search on your array.
This maintains cache locality, allows for efficient lookup (Will search in a space of ~4 billion strings with 32 comparisons), and is dead simple to implement. There's no need to get fancy with tries, because they are complicated, and slower than they appear (especially if you have long strings).
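A sketch of that approach using std::string and the standard algorithms (a char*-based version would look the same apart from the comparator):

#include <algorithm>
#include <string>
#include <vector>

// Build once: sort the strings lexicographically.
void build(std::vector<std::string>& strings) {
    std::sort(strings.begin(), strings.end());
}

// Lookup: binary search over the contiguous, cache-friendly array.
bool contains(const std::vector<std::string>& strings, const std::string& key) {
    return std::binary_search(strings.begin(), strings.end(), key);
}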
Random sidenote: Combined with http://blogs.msdn.com/b/oldnewthing/archive/2005/05/19/420038.aspx, you'll be unstoppable!
Well, assuming you truly want an array and not an associative container as you've mentioned, the allocation strategy mentioned in Raymond Chen's blog would be efficient.