How about storing the array indices in a map - c++

My program uses boost::unordered_map a lot, and the map has about 40 million entries. The program doesn't insert or delete very often; it mostly accesses entries randomly by key.
I'm wondering whether it would improve performance (in terms of the speed of accessing entries) if I stored my entry values (about 1 KB each) in a flat array (maybe a std::vector) and used boost::unordered_map only to map keys to indices into that array.
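For concreteness, a minimal sketch of the layout the question proposes (Entry and the key type are placeholders):

```cpp
#include <boost/unordered_map.hpp>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Entry { char data[1024]; };  // ~1 KB payload, as in the question

std::vector<Entry> entries;                                    // values stored contiguously
boost::unordered_map<std::uint64_t, std::size_t> keyToIndex;   // key -> index into `entries`

const Entry* lookup(std::uint64_t key) {
    auto it = keyToIndex.find(key);
    return it == keyToIndex.end() ? nullptr : &entries[it->second];
}
```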

Yes, that could seriously speed things up. In fact, that's what Boost's flat_map is for :)
The docs say, under Non-standard containers:
Using sorted vectors instead of tree-based associative containers is a well-known technique in C++ world. Matt Austern's classic article Why You Shouldn't Use set, and What You Should Use Instead (C++ Report 12:4, April 2000, PDF) was enlightening:
...
This gives you more than you asked for, because you don't even need the extraneous index: you get better locality of reference and a lower memory footprint. Most importantly, it gives you lower complexity (hence fewer bugs) and a drop-in replacement for std::[unordered_]map in terms of interface.
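A minimal sketch of using flat_map as that drop-in replacement (Entry is a placeholder for the ~1 KB value type):

```cpp
#include <boost/container/flat_map.hpp>
#include <cstdint>

struct Entry { char data[1024]; };   // placeholder ~1 KB value type

// Keys and values live in one sorted, contiguous sequence; find() is a
// binary search over that sequence, so no separate index vector is needed.
boost::container::flat_map<std::uint64_t, Entry> table;

void demo() {
    table[42] = Entry{};             // insertion shifts elements: O(N)
    auto it = table.find(42);        // lookup: O(log N) over contiguous memory
    if (it != table.end()) {
        it->second.data[0] = 'x';    // access the stored value in place
    }
}
```

Note the trade-off relative to the hash map: lookups become O(log N) binary searches and insertions shift elements, which for 40 million rarely-modified entries may or may not win overall, so measure.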

Storing the values in contiguous memory, as std::vector does, will increase cache locality. This can make a pretty big difference in performance, but it depends on the access pattern.
If you're hunting performance, remember the golden rule:
always measure!

Related

Small `vector`-like structure with no capacity distinct from size

I have a program with a very large number (hundreds of thousands, even millions) of fairly small vectors (~95% of them contain 2-6 ints). Once created, their size changes very rarely, but it cannot be known at compile time. I have no use for extra capacity beyond the size of the structure, and I'd like to do away with the overhead of storing the capacity separately. This would entail reallocating storage on any operation that changes the size of the structure, a cost I'm willing to pay.
Does a library providing a more or less drop-in replacement for vector with these characteristics exist?
Folly "is a library of C++14 components designed with practicality and efficiency in mind. Folly contains a variety of core library components used extensively at Facebook"
Folly's small_vector is an optimized implementation of a small vector.
Give it a look.
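A minimal usage sketch, assuming Folly is installed and on the include path (the inline capacity of 6 matches the 2-6 ints mentioned in the question):

```cpp
#include <folly/small_vector.h>

int main() {
    // Up to 6 ints live inline inside the object itself; no heap
    // allocation happens until the vector grows past that.
    folly::small_vector<int, 6> v;
    for (int i = 0; i < 5; ++i)
        v.push_back(i);
    return static_cast<int>(v.size());  // 5
}
```

Note that small_vector mainly optimizes away the heap allocation for small sizes; once it spills to the heap it behaves like a normal vector with a capacity.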
Are you creating each vector individually? If so, have you tried using plain arrays instead? You could do a quick size check (how many ints you're putting in the array) before creating one, so you never carry extra capacity; there is a minimal sketch of that idea below. Also, 김선달's comment above sounds like what you're looking for.
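If you truly cannot afford a capacity field, the structure is small enough to hand-roll; a minimal sketch (exact_vector is a made-up name, and every push_back reallocates, which is the cost the question accepts):

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>

// Stores exactly `size` ints: one pointer plus one size, no capacity field.
class exact_vector {
    std::unique_ptr<int[]> data_;
    std::size_t size_ = 0;
public:
    std::size_t size() const { return size_; }
    int& operator[](std::size_t i) { return data_[i]; }

    // O(size) per call: allocates an exactly-sized block and copies over.
    void push_back(int value) {
        auto bigger = std::make_unique<int[]>(size_ + 1);
        std::copy(data_.get(), data_.get() + size_, bigger.get());
        bigger[size_] = value;
        data_ = std::move(bigger);
        ++size_;
    }
};
```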

Deciding when to use a hash table

I was solving a competitive programming problem with the following requirements:
I had to maintain a list of unique 2D points (x, y); the number of unique points would be less than 500.
My idea was to store them in a hash table (a C++ unordered_set, to be specific): each time a node turned up, I would look it up in the table and, if it wasn't already there, insert it.
I also know for a fact that I wouldn't be doing more than 500 lookups.
Then I saw some solutions simply searching through an (unsorted) array and checking whether the node was already there before inserting.
My question is: is there any reasonable way to guess when I should use a hash table over a manual search over the keys, without having to benchmark?
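As an aside, std::unordered_set has no default hash for std::pair<int, int>, so storing (x, y) points usually means packing each point into one integer or supplying a custom hash; a minimal sketch with made-up names:

```cpp
#include <cstdint>
#include <unordered_set>

// Pack (x, y) into one 64-bit key so std::hash<uint64_t> can be used.
// Assumes each coordinate fits in 32 bits.
std::uint64_t pack(std::int32_t x, std::int32_t y) {
    return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(x)) << 32)
         | static_cast<std::uint32_t>(y);
}

std::unordered_set<std::uint64_t> seen;

bool insert_if_new(std::int32_t x, std::int32_t y) {
    return seen.insert(pack(x, y)).second;  // true if the point was new
}
```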
My question is: is there any reasonable way to guess when I should use a hash table over a manual search over the keys, without having to benchmark?
I am guessing you are familiar with basic algorithmics and time complexity, and with the C++ standard containers, and that you know that, with a good hash function, hash table access is O(1) on average.
If the hash table code (or some balanced tree code, e.g. using std::map - assuming there is an easy order on keys) is more readable, I would prefer it for that readability reason alone.
Otherwise, you might make a guess taking into account the approximate timing of various operations on a PC. BTW, the entire http://norvig.com/21-days.html page is worth reading.
Basically, memory accesses are much slower than everything else in the CPU, and the CPU cache is extremely important. A memory access that misses the cache and has to fetch data from the DRAM modules is several hundred times slower than an elementary arithmetic operation or machine instruction (e.g. adding two integers in registers).
In practice, it does not matter that much as long as your data is tiny (e.g. fewer than a thousand elements), since in that case it is likely to sit in the L2 cache.
Searching linearly through an array is really fast (since it is very cache friendly), up to several thousand (small) elements.
IIRC, Herb Sutter mentions in some video that even inserting an element into the middle of a vector is, unintuitively, faster (taking into account the time needed to shift the following elements) than inserting it into a balanced tree (or perhaps some other container, e.g. a hash table), up to a container size of several thousand small elements. This is on a typical tablet, desktop, or server microprocessor with a multi-megabyte cache. YMMV.
If you really care that much, you cannot avoid benchmarking.
Notice that 500 pairs of integers probably fit into the L1 cache!
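When you do benchmark, the harness can be small; a minimal sketch comparing the two approaches from the question (the size and key generation are made up for illustration):

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <unordered_set>
#include <vector>

int main() {
    const long n = 500;  // problem size from the question
    std::vector<long> vec;
    std::unordered_set<long> set;
    for (long i = 0; i < n; ++i) { vec.push_back(i * 7); set.insert(i * 7); }

    auto time_us = [](auto&& f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        return std::chrono::duration<double, std::micro>(
                   std::chrono::steady_clock::now() - t0).count();
    };

    long hits = 0;
    double linear = time_us([&] {   // linear scan over the array
        for (long k = 0; k < n; ++k)
            hits += std::find(vec.begin(), vec.end(), k * 7) != vec.end();
    });
    double hashed = time_us([&] {   // hash table lookups
        for (long k = 0; k < n; ++k)
            hits += set.count(k * 7);
    });
    std::printf("%ld hits, linear: %.1f us, hash: %.1f us\n", hits, linear, hashed);
}
```

Expect the results to vary with element size, key distribution, and compiler flags, which is exactly why measuring beats guessing.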
My rule of thumb is to assume the processor can deal with 10^9 operations per second.
In your case there are only 500 entries, so an algorithm up to O(N^2) should be safe. By using a contiguous data structure like a vector you can leverage fast cache hits, whereas a hash function can carry a noticeable constant cost. However, if your data size were 10^6, the safe total complexity might be only O(N); in that case you would want the O(1) hash map lookup.
You can use Big O complexity to roughly estimate the performance. For a hash table, searching for an element is O(1) on average and O(n) in the worst case. That means that in the typical case your access time is independent of the number of elements in your map, but in the worst case it depends linearly on the size of your hash table.
A balanced binary tree has a guaranteed search complexity of O(log(n)). That means that searching for an element always depends on the size of the container, but in the worst case it's faster than a hash table.
You can look up some Big O Complexities at this handy website here: http://bigocheatsheet.com/

Map vs unordered_map - multithreading

I have the following requirements:
I need a data structure with key/value pairs (the keys are integers, if that helps).
I need the following operations:
Iteration (most used)
Insertion (second most used)
Searching by key, and deletion (least used)
I plan to use multiple locks over the structure for concurrent access.
What is the ideal data structure to use?
A map or an unordered map?
I think an unordered map makes sense, because I can insert in O(1) and delete in O(1). But I am not sure about iteration: how bad is its performance compared to a map?
Also, I plan to use multiple locks on blocks of the structure instead of locking the whole structure. Is there a good implementation example of this?
The speed of iterator incrementing is O(1) for both containers, although you might get somewhat better cache locality from std::unordered_map.
Apart from the slower O(log N) find/insert/erase functionality of std::map, one other difference is that std::map provides bidirectional iterators, whereas the faster (amortized O(1) element access) std::unordered_map only provides forward iterators.
The excellent book C++ Concurrency in Action: Practical Multithreading by Anthony Williams provides a code example of a multithreaded hash map with a lock per bucket. This book is highly recommended if you are doing serious multithreaded coding.
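The idea there is one lock per bucket rather than one lock for the whole table. A minimal sketch of the pattern (my simplified illustration, not the book's code; the fixed bucket count means it never rehashes):

```cpp
#include <array>
#include <list>
#include <mutex>
#include <optional>
#include <utility>

// Toy lock-striped hash map: each bucket has its own mutex, so threads
// touching different buckets never contend.
class striped_map {
    struct bucket {
        std::mutex m;
        std::list<std::pair<int, int>> items;  // (key, value) pairs
    };
    std::array<bucket, 16> buckets_;           // fixed bucket count: never rehashes

    bucket& bucket_for(int key) {
        return buckets_[static_cast<unsigned>(key) % buckets_.size()];
    }

public:
    void insert_or_assign(int key, int value) {
        bucket& b = bucket_for(key);
        std::lock_guard<std::mutex> lock(b.m);
        for (auto& kv : b.items)
            if (kv.first == key) { kv.second = value; return; }
        b.items.emplace_back(key, value);
    }

    std::optional<int> find(int key) {
        bucket& b = bucket_for(key);
        std::lock_guard<std::mutex> lock(b.m);
        for (const auto& kv : b.items)
            if (kv.first == key) return kv.second;
        return std::nullopt;
    }
};
```

Because a thread only locks the one bucket it touches, readers and writers of different keys proceed in parallel; the price is that whole-table operations such as iteration need a scheme for taking all the locks.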
Iteration is not a problem in an unordered_map. It is a little less efficient than a vector, but not largely so.
As always, you will need to benchmark for YOUR use-cases, and compare with other container types if it's a critical part of your code.
Not sure what you mean by "multiple locks on blocks instead of the whole structure" - any container updates will need to be locked for the whole container...
Why not simply use an existing concurrent_unordered_map, which you can find in both TBB and ConcRT?
Have you thought about trying a std::deque? The reasoning is as follows:
Iteration is fast: the data is stored in mostly contiguous chunks (unlike a list).
Insertion (at either end) is quick: a deque never moves its existing elements.
Searching by key and deletion are slow, but those are your least common operations.
If the last two cases were common, a std::list could be used instead. Also consider testing a std::vector, since it is more cache efficient.
Iteration in an unordered_map may be slow due to stepping over a large number of unused buckets in the hash table. Insertions will be quick until the collision level becomes intolerable, at which point the whole data structure needs to be rehashed.
Maps have relatively fast iteration, except that the data elements may be far apart in memory. Insertion can be slow due to the re-balancing of the red-black tree that it requires.
The main use case for unordered_maps is fast lookup (O(1)). A normal map has fairly fast lookup (O(log n)) but much better iteration behavior.
If you have hard real-time requirements, I would recommend map over unordered_map. std::map has guaranteed performance 100% of the time, but the std::unordered_map may do a rehash and completely ruin real-time performance in some critical corner case. In general, I prefer red-black trees (std::map) over hashtables (std::unordered_map) if I need absolute guarantees on worst-case performance.

Alternative to vectors for large data sets? C++

I am looking for a data structure that holds data in the order it's inserted (like a vector) and that needs to hold millions of unsigned longs. The key requirement is a lookup that's better than O(log n), because the structure will be searched against a similar vector of the same size. Does something like this exist?
If I insert 10, 20, 30 and then iterate over the set, I need the order 10, 20, 30 to be guaranteed. My data is a string that I converted into an unsigned long to reduce memory use; the conversion is reversible.
EDIT:
Since people are asking: I am comparing two vectors against each other (both very large in size) to find the elements they have in common.
Small Example:
vector 1: 10 20 30 40 50 60
vector 2: 11 24 30 40 55 70 90
result: 30 40
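Since both sequences in the example are already sorted, the comparison itself does not need a hash at all; a minimal sketch using std::set_intersection (which requires sorted inputs):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

std::vector<unsigned long> common(const std::vector<unsigned long>& a,
                                  const std::vector<unsigned long>& b) {
    // Both inputs must be sorted; runs in O(a.size() + b.size()).
    std::vector<unsigned long> out;
    std::set_intersection(a.begin(), a.end(), b.begin(), b.end(),
                          std::back_inserter(out));
    return out;
}
```

For {10, 20, 30, 40, 50, 60} and {11, 24, 30, 40, 55, 70, 90} this yields {30, 40}, matching the example.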
I never used it myself, and it might be out of date compared to recent C++ features (the last update is from 2011), but STXXL is meant to be a set of containers and algorithms built for very large amounts of data.
It might fit your need.
The core of STXXL is an implementation of the C++ standard template library STL for external memory (out-of-core) computations, i.e., STXXL implements containers and algorithms that can process huge volumes of data that only fit on disks. While the closeness to the STL supports ease of use and compatibility with existing applications, another design priority is high performance.
The key features of STXXL are:
Transparent support of parallel disks. The library provides implementations of basic parallel disk algorithms. STXXL is the only external memory algorithm library supporting parallel disks.
The library is able to handle problems of very large size (tested up to dozens of terabytes).
Improved utilization of computer resources. STXXL implementations of external memory algorithms and data structures benefit from overlapping of I/O and computation.
Small constant factors in I/O volume. A unique library feature called "pipelining" can save more than half the number of I/Os, by streaming data between algorithmic components instead of temporarily storing them on disk. A development branch supports asynchronous execution of the algorithmic components, enabling high-level task parallelism.
Shorter development times due to well-known STL-compatible interfaces for external memory algorithms and data structures.
STL algorithms can be directly applied to STXXL containers; moreover, the I/O complexity of the algorithms remains optimal in most of the cases.
For internal computation, parallel algorithms from the MCSTL or the libstdc++ parallel mode are optionally utilized, making the algorithms inherently benefit from multi-core parallelism.
A hash map is one way to get faster lookup than a sorted vector. You need C++11 support to use std::unordered_map:
http://www.cplusplus.com/reference/unordered_map/unordered_map/
To preserve the order of the data, the only way would be to maintain a vector alongside it that stores the values in insertion order as well.
Before you jump to using it you should consider how you are going to use this data structure (access pattern). Also consider what the data you will be getting is likely to be.
Here is boost's version of the same thing http://www.boost.org/doc/libs/1_53_0/doc/html/unordered.html
I think what you should use is an unordered_map combined with a doubly-linked list for the order.
Every time you add a new item to your database, you first add it to the front (or the back) of the linked list, and then add it to the hash map, where the key is the value itself (the unsigned long) and the "value" (of the key/value pair) is the pointer to the object in the linked list; see the sketch below.
Now if you want a fast lookup you check the hash map, and if you want to iterate in order you walk the linked list.
Of course, when you want to remove an object you have to remove it from both, but complexity-wise it's the same (amortized O(1) for everything).
This will of course increase your memory use by a factor of 2 or 3 compared to just using a hash map.
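A minimal sketch of that combination for unsigned longs (ordered_set is a made-up name, and this is an illustration rather than a production container):

```cpp
#include <iterator>
#include <list>
#include <unordered_map>

// Insertion-ordered set: the list preserves order, the map gives O(1) lookup.
class ordered_set {
    std::list<unsigned long> order_;
    std::unordered_map<unsigned long, std::list<unsigned long>::iterator> index_;
public:
    bool insert(unsigned long v) {
        if (index_.count(v)) return false;      // already present
        order_.push_back(v);
        index_.emplace(v, std::prev(order_.end()));
        return true;
    }
    bool contains(unsigned long v) const { return index_.count(v) != 0; }
    void erase(unsigned long v) {
        auto it = index_.find(v);
        if (it == index_.end()) return;
        order_.erase(it->second);               // O(1) thanks to the stored iterator
        index_.erase(it);
    }
    const std::list<unsigned long>& items() const { return order_; }  // iterate in order
};
```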

std::map vs. self-written std::vector based dictionary

I'm building a content storage system for my game engine and I'm looking at possible alternatives for storing the data. Since this is a game, it's obvious that performance is important. Especially considering various entities in the engine will be requesting resources from the data structures of the content manager upon their creation. I'd like to be able to search resources by a name instead of an index number, so a dictionary of some sort would be appropriate.
What are the pros and cons of using std::map versus creating my own dictionary class based on std::vector? Are there any speed differences (and if so, where does performance take a hit, i.e. appending vs. accessing), and is there any point in taking the time to write my own class?
For some background on what needs to happen:
Writing to the data structures occurs only at one time, when the engine loads. So no writing actually occurs during gameplay. When the engine exits, these data structures are to be cleaned up. Reading from them can occur at any time, whenever an entity is created or a map is swapped. There can be as little as one entity being created at a time, or as many as 20, each needing a variable number of resources. Resource size can also vary depending on the size of the file being read in at the start of the engine, images being the smallest and music being the largest depending on the format (.ogg or .midi).
Map: std::map has guaranteed logarithmic lookup complexity. It's usually implemented by experts and will be of high quality (e.g. exception safety). You can use custom allocators for custom memory requirements.
Your solution: It'll be written by you. A vector is for contiguous storage with random access by position, so how will you implement lookup by value? Can you do it with guaranteed logarithmic complexity or better? Do you have specific memory requirements? Are you sure you can implement the lookup algorithm correctly and efficiently?
3rd option: If your key type is a string (or anything else that's expensive to compare), also consider std::unordered_map, which has constant-time lookup by value in typical situations (though not quite guaranteed).
If you want the speed guarantee of std::map as well as the low memory usage of std::vector you could put your data in a std::vector, std::sort it and then use std::lower_bound to find the elements.
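A minimal sketch of that approach for the name-to-resource lookup described in the question (Resource is a placeholder type):

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

struct Resource { /* image, sound, ... */ };

std::vector<std::pair<std::string, Resource>> table;

// Call once after loading everything at engine start-up.
void freeze() {
    std::sort(table.begin(), table.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });
}

// O(log N) binary search over contiguous memory.
const Resource* find(const std::string& name) {
    auto it = std::lower_bound(table.begin(), table.end(), name,
                               [](const auto& entry, const std::string& key) {
                                   return entry.first < key;
                               });
    return (it != table.end() && it->first == name) ? &it->second : nullptr;
}
```

This fits the stated access pattern well: all writes happen once at load time, so the sort cost is paid once, and every lookup afterwards is a binary search over contiguous memory.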
std::map is written with performance in mind anyway; whilst it does have some overhead because it has been generalized to all circumstances, it will probably end up more efficient than your own implementation. It uses a red-black binary tree, giving all of its operations O(log n) efficiency (aside from copying and iterating, for obvious reasons).
How often will you be reading/writing to the map, and how long will each element be in it? Also, you have to consider how often will you need to resize etc. Each of these questions is crucial to choosing the correct data structure for your implementation.
Overall, one of the std functions will probably be what you want, unless you need functionality which is not in a single one of them, or if you have an idea which could improve on their time complexities.
EDIT: Based on your update, I would agree with Kerrek SB that if you're using C++0x, then std::unordered_map would be a good data structure to use in this case. However, bear in mind that your performance can degrade to linear time complexity if you have colliding hashes (this cannot happen with std::map), since colliding pairs are stored in the same bucket. Whilst this is rare, the probability obviously increases with the number of elements. So if you're writing a huge game, it's possible that std::unordered_map could become less optimal than std::map. Just a consideration. :)