How to keep track of visited points in C++

I am doing a problem in C++ that has to keep track of points visited during a traversal. A point is basically:
struct Point {
int x;
int y;
};
My first thought for solving something like this would be to use something like
std::set<Point> visited_points;
or maybe
std::map<Point, bool> visited_points;
However, I am a beginner in C++, and I realized you have to implement a Compare for this, which I didn't know how to do. When I asked about it, I was told that using a map was "overkill" in a problem like this, and that the better solution was to do something like
std::vector<std::vector<bool>> visited_points;
The reasoning was that std::map was not the best solution, since using a vector is faster.
I'm wondering why using a double vector is better in terms of style and performance. Is it because implementing a Compare is hard for a Point? A double vector feels hacky to me, and I also think it looks uglier than using a set or map. Is it really the best way to approach this problem, or is there a better solution I don't know about?

If someone asks you, in the abstract, "What is the best way of keeping track of objects I've visited?", then you would be forgiven for replying "Use an std::unordered_set<Object>" (usually called a hash table in languages other than C++). That's a nice simple answer, and it is often correct if you don't know anything at all about the objects. After all, a hash lookup is (expected) O(1), and in practice is usually quite fast.
There are a few caveats, the biggest one being that you will need to be able to compute a hash for each object. The C++ standard library does not (yet) come with a framework for computing hashes of arbitrary objects, not even PODs, and rendering an object as a string in order to be able to take advantage of std::hash<std::basic_string> is usually way too much work (unless the object is already a string, of course).
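For illustration, here is a minimal sketch of what such a hash might look like for the Point struct from the question. Everything beyond the struct itself is an assumption for the example, and the shift-and-XOR combiner is just one common choice:
#include <cstddef>
#include <functional>
#include <unordered_set>
struct Point { int x; int y; };
// unordered containers also need equality
bool operator==(const Point& a, const Point& b) { return a.x == b.x && a.y == b.y; }
namespace std {
template <> struct hash<Point> {
    size_t operator()(const Point& p) const noexcept {
        size_t h1 = hash<int>{}(p.x);
        size_t h2 = hash<int>{}(p.y);
        return h1 ^ (h2 << 1);  // combine the two coordinate hashes
    }
};
}
std::unordered_set<Point> visited_points;  // now compiles and works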
If you can't figure out how to write a hash function for your object, you might then think about using an ordered associative container (aka a balanced BST). However, that is not a good idea. Not because it is difficult to write a comparison function; writing comparison functions is usually trivial, particularly for PODs. You can leverage the fact that std::tuple implements a comparison function for every tuple whose element types are all comparable.
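For example, a Compare for the Point from the question is a one-liner with std::tie (a sketch, not the only way to do it):
#include <set>
#include <tuple>
struct Point { int x; int y; };
bool operator<(const Point& a, const Point& b) {
    // std::tie builds tuples of references; tuples compare lexicographically
    return std::tie(a.x, a.y) < std::tie(b.x, b.y);
}
std::set<Point> visited_points;  // std::less<Point> finds this operator<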
The real problem with ordered associative containers is that they are high overhead. Element access is slow: O(log n), not O(1), and the constant is not small either. And the bookkeeping data required to maintain the balanced tree is much larger than the two-pointer hash-table node (and even that is quite big for small objects). So ordered associative containers really only make sense if you need to be able to traverse them in order. Generally, "visited" maps don't need to be traversed at all -- they are just used for lookup.
Both ordered and unordered containers have another problem: the objects in the container are individual dynamic memory allocations (the API requires that references to the objects in the container must be stable), so over time the individual objects end up getting scattered across dynamic memory, leading to a lot of cache misses.
But, really, even before you start thinking about how easy (or difficult) it will be to hash your objects in order to keep them in a hash-set, you should think about the nature of the objects you are tracking. In particular, can they be easily indexed with a small(-ish) integer? If so, you could just use a vector of bits, one bit per possible object. That's an efficient representation, both for access speed (definitely O(1)) and for space, and it is optimal for memory caching.
If your objects are easily numbered then bit-vectors will be an attractive alternative. One bit per object is (literally) two orders of magnitude less space than a hash-map, so unless you expect your visited map to be extremely sparse (rarely the case in algorithms which need a visited map), it's going to be a big win.
In the case of your problem, which I gather has to do with keeping track of points visited in a rectangular array such as a gameboard or an image, it is clear that the bit vector approach is going to work out well. It's true that you require two levels of indexing (unless you reduce the two indices into a single integer, which is quite easy if you know the dimensions), but that doesn't add much overhead.
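Concretely, a sketch of a flattened visited map; the dimensions here are made-up placeholders for whatever your board or image actually is:
#include <cstddef>
#include <vector>
constexpr int kWidth = 1000;   // assumed dimensions
constexpr int kHeight = 1000;
std::vector<bool> visited(static_cast<std::size_t>(kWidth) * kHeight, false);
std::size_t index(int x, int y) {
    return static_cast<std::size_t>(y) * kWidth + x;  // fold (x, y) into one integer
}
void mark(int x, int y) { visited[index(x, y)] = true; }
bool seen(int x, int y) { return visited[index(x, y)]; }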
Although there are doubts about how good an idea it was, the C++ standard library special cases std::vector<bool> to really be a bit vector. That makes it impossible to create a native pointer to a single element of the vector (which is why many people consider std::vector<bool> to be a hack), and creates some other odd issues when you try to use it as a vector. But if all you want is a bitmask -- as in the case of a visited map -- then it is a pretty good solution.
C++ also offers real bit vectors -- std::bitset -- but unfortunately these need to have their size known at compile time. Boost offers dynamic_bitset, which is a kind of std::vector<bool> written with hindsight, so it's also worth looking at.

Related

Memory efficient std::map alternative

I'm using a std::map to store about 20 million entries. If they were stored without any container overhead, it would take approximately 650MB of memory. However, since they are stored using std::map, it uses up about 15GB of memory (i.e. too much).
The reason I am using an std::map is because I need to find keys that are equal to/larger/smaller than x. This is why something like sparsehash wouldn't work (since, using that, I cannot find keys by comparison).
Is there an alternative to using std::map (or ordered maps in general) that would result in less memory usage?
EDIT: Writing performance is much more important than reading performance. The program will probably only read ~10 entries, but I don't know in advance which entries it will read.
One alternative would be to use flat_map from Boost.Container: it supports the same interface as std::map, but is backed by a sorted contiguous array (think std::vector) instead of a tree. Or hand-roll your own solution based on the same idea.
Its performance characteristics are of course different due to the different back-end; it's up to you to evaluate whether it's usable in your case.
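A minimal sketch of the flat_map route; the key and value types are placeholders for the asker's actual data:
#include <boost/container/flat_map.hpp>
#include <cstdint>
int main() {
    boost::container::flat_map<std::uint64_t, int> m;
    m.reserve(20000000);          // contiguous storage can be reserved up front
    m[42] = 1;                    // same interface as std::map
    auto it = m.lower_bound(40);  // ordered queries (equal/larger/smaller) still work
    return (it != m.end()) ? it->second : 0;
}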
Are you writing on-the-fly, or one time before the lookups are done? If the latter is the case, you shouldn't need a map: you could use std::vector and a one-time sort.
You could just insert everything unsorted into the vector, sort once after everything is there (O(N log N), the same as building the std::map, but with much better constant factors), and then look up in the sorted array (O(log N), the same as std::map).
This works especially well if you know the number of elements before reading and can reserve the vector size up front. Even a rough upper bound lets you reserve slightly more than actually needed while avoiding reallocations.
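A sketch of that load-sort-search pattern; the key and payload types are illustrative guesses:
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>
typedef std::pair<std::uint64_t, int> Entry;  // key, payload
std::vector<Entry> entries;
void load() {
    entries.reserve(20000000);                 // if the count (or a bound) is known
    // ... push_back all entries, unsorted ...
    std::sort(entries.begin(), entries.end()); // one-time O(N log N) sort by key
}
const int* find(std::uint64_t key) {
    auto it = std::lower_bound(entries.begin(), entries.end(), key,
        [](const Entry& e, std::uint64_t k) { return e.first < k; });
    return (it != entries.end() && it->first == key) ? &it->second : nullptr;
}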
Given your requirements:
Insertion needs to be quick
There are many elements to read
Read-back can be slow
You only read back data once
I'd consider typedef std::pair<uint64, thirty_six_byte_struct> element; and populate a std::list<element>. That will be hard to beat in terms of performance.
For reading back, I'd simply traverse the linked list, checking at every point if you need one of those elements. That's an O(N) traversal but, as you say, you'll only do it once.
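A sketch of that suggestion; thirty_six_byte_struct is the answer's placeholder for the real 36-byte payload, and the predicate-based read-back is my own framing of "checking at every point if you need one of those elements":
#include <cstdint>
#include <list>
#include <utility>
struct thirty_six_byte_struct { char data[36]; };  // placeholder payload
typedef std::pair<std::uint64_t, thirty_six_byte_struct> element;
std::list<element> elements;
void write(std::uint64_t key, const thirty_six_byte_struct& value) {
    elements.emplace_back(key, value);  // O(1) append, never reallocates
}
template <typename Pred>
void read_back(Pred wanted) {           // the single O(N) read-back pass
    for (const element& e : elements)
        if (wanted(e.first)) { /* consume e.second */ }
}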
Turns out the issue wasn't std::map.
I realized I was using 3 separate maps to represent various parts of the same data, and after slimming it down to 1, the difference in memory was entirely negligible.
Looking at the code a little more, I realized that code I had written to free a really expensive struct (one per element of the map) didn't actually work.
Fixing that part, it now uses <1GB of memory, as it should! :)
TL;DR: std::map's overhead is entirely negligible for this. The issue was my own.

Fast data structure that supports finding the minimum element and accessing, inserting, removing and updating data at any index

I'm looking for ideas to implement a templatized sequence container data structure which can beat the performance of std::vector in as many features as possible and potentially perform much faster. It should support the following:
Finding the minimum element (and returning its index)
Insertion at any index
Removal at any index
Accessing and updating any element by index (via operator[])
What would be some good ways to implement such a structure in C++?
You can generally be pretty sure that the STL implementations of all containers tend to be very good at the range of tasks they were designed for. That is to say, you're unlikely to be able to build a container that is as robust as std::vector and quicker for all applications. Generally speaking, however, it is almost always possible to beat a generic tool when optimizing for a specific application.
First, let's think about what a vector actually is. You can think of it as a pointer to a C-style array, except that its elements are stored on the heap. Unlike a C array, it also provides a bunch of methods that make it a little more convenient to manipulate. But like a C array, all of its data is stored contiguously in memory, so lookups are extremely cheap, but changing its size may require the entire array to be shifted elsewhere in memory to make room for new elements.
Here are some ideas for how you could do each of the things you're asking for better than a vanilla std::vector:
Finding the minimum element: Search is typically O(N) for many containers, and certainly for a vector (because you need to iterate through all elements to find the lowest). You can make it O(1), or very close to free, by simply keeping track of the smallest element at all times and only updating it when the container changes (see the sketch after this list).
Insertion at any index: If your elements are small and there are not many, I wouldn't bother tinkering here; just do what the vector does and keep elements contiguously next to each other to keep lookups quick. If you have large elements, store pointers to the elements instead of the elements themselves (Boost's stable_vector will do this for you). Keep in mind that this makes lookup more expensive, because you now need to dereference the pointer, so whether you want to do this will depend on your application. If you know the number of elements you are going to insert, std::vector provides the reserve method, which preallocates some memory for you; what it doesn't do is let you decide how the allocated memory grows. So if your application warrants lots of push_back operations without enough information to intelligently call reserve, you might be able to beat the standard std::vector implementation by tailoring the growth function of your container to your particular needs. Another option is a linked list (e.g. std::list), which will beat a std::vector at insertion for larger containers. However, the cost is that lookup (see 4.) becomes vastly slower (O(N) instead of O(1) for vectors), so you're unlikely to want to go down this path unless you plan to do more insertions/erasures than lookups.
Removal at any index: Similar considerations as for 2.
Accessing and updating any element by index (via operator[]): The only way you can beat std::vector in this regard is by making sure your data is in the cache when you try to access it. This is because lookup in a vector is essentially an array lookup, which is just some pointer arithmetic and a pointer dereference. If you don't access your vector often, you might be able to squeeze out a few clock cycles by using a custom allocator (see Boost.Pool) and placing your pool close to the stack pointer.
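As a sketch of the first point, here is a hypothetical wrapper that caches the index of the minimum: finding it becomes O(1), with an O(N) rescan only when the cached minimum itself is erased or overwritten:
#include <algorithm>
#include <cstddef>
#include <vector>
template <typename T>
class MinTrackingVector {
    std::vector<T> data_;
    std::size_t min_idx_ = 0;
    void rescan() {  // O(N) fallback, needed only when the minimum goes away
        min_idx_ = static_cast<std::size_t>(
            std::min_element(data_.begin(), data_.end()) - data_.begin());
    }
public:
    std::size_t min_index() const { return min_idx_; }            // O(1)
    const T& operator[](std::size_t i) const { return data_[i]; }
    void insert(std::size_t i, const T& v) {
        data_.insert(data_.begin() + i, v);
        if (i <= min_idx_) ++min_idx_;  // the old minimum shifted right
        if (data_.size() == 1 || v < data_[min_idx_]) min_idx_ = i;
    }
    void erase(std::size_t i) {
        data_.erase(data_.begin() + i);
        if (data_.empty()) { min_idx_ = 0; return; }
        if (i == min_idx_) rescan();    // the cached minimum is gone
        else if (i < min_idx_) --min_idx_;
    }
    void update(std::size_t i, const T& v) {  // writes go through here so the
        data_[i] = v;                         // cache stays consistent
        if (v < data_[min_idx_]) min_idx_ = i;
        else if (i == min_idx_) rescan();
    }
};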
I stopped writing mainly because there are dozens of ways in which you could approach this problem.
At the end of the day, this is probably more of an exercise in teaching you that the implementation of std::vector is likely to be extremely efficient for most compilers. All of these suggestions are essentially micro-optimizations (which are the root of all evil), so please don't blindly apply these in important code, as they're highly likely to end up costing you a lot of time and headache.
However, that's not to say you shouldn't tinker and learn for yourself, so by all means go ahead and try to beat it for your application and let us know how you go! Good luck :)

What is a good data structure with fast insertions+deletions and partial reversing?

I am writing an algorithm in C++, and for that I need a data structure to store a sequence of n data points. That is, I want to store elements v[0],v[1],...,v[n-1]. I do care about the order.
The operations I would like to be fast are:
Sequential access (i.e. access to v[0], then v[1] and so on with the ability to edit the values);
Point relocation, i.e.
{v[0],v[1],...,v[i],v[i+1],...,v[j-1],v[j],v[j+1],...,v[n-1]} -> {v[0],v[1],...,v[i],v[j],v[i+1],...,v[j-1],v[j+1],...,v[n-1]};
Partial reversion, i.e.
{v[0],...,v[i],v[i+1],...,v[j-1],v[j],v[j+1],...,v[n-1]} -> {v[0],...,v[i],v[j],v[j-1],...,v[i+1],v[j+1],...,v[n-1]}.
It seems that I can implement my algorithm using an XOR linked list, and it will give the smallest complexity (the operations above will be O(1), giving O(n^2) for my algorithm). But I know that the XOR linked list is considered to be "an ugly data structure" ([1], [2]).
What is a good data structure for this, then? More precisely, is there any other commonly used data structure implementing these operations in O(1) time?
It depends on a lot of factors that you have not mentioned.
First, it depends on the size of your elements and the cost of copying them. If you have small elements (about 64 bytes or less) and their copying (or moving) is cheap, or even trivial in the POD-type sense, then using std::vector is likely to be a very good choice, even with the "worse" time complexities (by the way, as a tip for the future: don't get too hung up on the pursuit of minimal time complexity; it's only one part of the whole story).
This is especially true if your elements are trivial, because the rotate operation in a vector is going to be very fast (although still O(n)), while the other operations are the same as a linked list's in terms of time complexity.
Second, it depends on how often you do the different operations that you mentioned. Sequential access through a linked list is really inefficient, and linked lists are generally not recommended if traversals are something you do often.
If your elements are so small that you would consider using an XOR linked list (to spare one pointer per element), then it's likely that you shouldn't be using a linked list at all. And to be honest, I'm not sure the XOR linked list is ever really worth it (but I don't want to get into that).
Overall, I would have to say that you should test these three options: linked-list (either std::forward_list or std::list), std::vector, or a vector of pointers (e.g. std::vector< std::unique_ptr<T> >). Another choice might be an unrolled linked-list, but that will only be good within a specific regime of the factors that I mentioned.
And I reiterate, I said test them, because that is really the only way you will know what is best. "Analysis" is only for rules of thumb, no more, as per one of my favorite quotes:
As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality. (Albert Einstein)
I would just use a std::vector<T>, then only look for a potentially more efficient data structure if you find the operations are too expensive in practice.
Sequential access of std::vector<T> is extremely efficient.
The std::rotate algorithm will work for operation 2.
The std::reverse algorithm will work for operation 3.
The mentioned algorithms are in the <algorithm> header.
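A sketch of operations 2 and 3 in terms of those algorithms, using the indices from the question:
#include <algorithm>
#include <cstddef>
#include <vector>
std::vector<int> v;
// Operation 2: move v[j] to just after v[i] (point relocation).
void relocate(std::size_t i, std::size_t j) {
    std::rotate(v.begin() + i + 1, v.begin() + j, v.begin() + j + 1);
}
// Operation 3: reverse the subrange v[i+1] .. v[j] (partial reversion).
void reverse_part(std::size_t i, std::size_t j) {
    std::reverse(v.begin() + i + 1, v.begin() + j + 1);
}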

Are there versions of the C++ STL's associative data structures optimized for numerous partial copies?

I have a large tree that grows as my algorithm progresses. Each node contains a set, which I suppose is implemented as a balanced binary search tree. Each node's set shall remain fixed after that node's creation, before its use in creating that node's children.
I fear however that copying each set is prohibitively expensive. Instead, I would prefer that each newly created node's set utilize all appropriate portions of the parent node's set. In short, I'm happy copying O(log n) of the set but not O(n).
Are there any variants of the STL's associative data structures that offer such a partial-copy optimization? Perhaps in Boost? Such a data structure would be trivial to implement in Haskell or OCaml, of course, but it would require more effort in C++.
I know it's not generally productive to suggest a different language, but Haskell's standard container libraries do exactly this. I remember seeing a video (was it Simon Peyton Jones?) talking about this exact problem, and how a Haskell solution ended up being much faster than a C++ solution for the given programmer effort. Of course, this was for a specific problem that had a lot of sets with a lot of shared elements.
There is a fair amount of research into this subject. If you are looking for keywords, I suggest searching for "functional data structures" instead of "immutable data structures", since most functional paradigms benefit from immutability in general. Structures such as finger tree were developed to solve exactly this problem.
I know of no C++ library that implements these data structures. There is nothing stopping you from reading the relevant papers (or the Haskell source code, which is about 1k lines for Data.Set, including tests) and implementing it yourself, but I know that is not what you'd want to hear. You'd also need to do some kind of reference counting for the shared nodes, which for such deep structures can have a higher overhead than even simple garbage collectors.
It's practically impossible in C++, since the notion of an immutable container doesn't exist. You may know that you'll be making no changes, and that some sort of shared representation would be preferable, but the compiler and the library don't, and there's no way of communicating this to them.
Each node contains a set, which I suppose is implemented as a balanced binary search tree. Each node's set shall remain fixed after that node's creation, before its use in creating that node's children.
That's a pretty unique case. I would recommend using std::vector instead. (No, really!) The code creating the node can still use a set, switching to a vector at the last second. However, the vector is smaller and takes only a tiny number of memory allocations (one, if you use reserve), making the algorithm much faster.
#include <map>
#include <set>
#include <utility>
#include <vector>
typedef unsigned int treekeytype;
typedef std::vector<unsigned int> minortreetype;
typedef std::pair<treekeytype, minortreetype> majornode;
typedef std::map<treekeytype, minortreetype> majortype; // a map, so operator[] below works
majortype majortree;
void func(majortype::iterator perform) {
    // Build the child's contents in a set while mutating freely...
    std::set<unsigned int> results(perform->second.begin(), perform->second.end());
    // ...then freeze it into a compact sorted vector.
    majortree[perform->first + 1].assign(results.begin(), results.end()); // the only change is here
    majortype::iterator next = majortree.find(perform->first + 1);
    if (next != majortree.end()) // termination guard elided in the original
        func(next);
}
You can even use std::lower_bound and std::upper_bound to still get O(log(n)) memory accesses since it's still sorted the same as the set was, so you won't lose any efficiency. It's pure gain as long as you don't need to insert/remove frequently.
I fear however that copying each set is prohibitively expensive.
If this fear is caused because each set contains mostly the same nodes as its parent's, and the data is costly (to copy or in memory, whichever), with only a few nodes changed, make the subtrees contain std::shared_ptrs instead of the data themselves. That way the data itself will not get copied, only the pointers.
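A sketch of that idea; Payload and its key field are hypothetical stand-ins for the real element type:
#include <memory>
#include <set>
struct Payload { int key; /* ... expensive data ... */ };
struct PtrLess {  // order by payload value, not by pointer address
    bool operator()(const std::shared_ptr<Payload>& a,
                    const std::shared_ptr<Payload>& b) const {
        return a->key < b->key;
    }
};
typedef std::set<std::shared_ptr<Payload>, PtrLess> NodeSet;
NodeSet make_child(const NodeSet& parent) {
    NodeSet child = parent;  // copies O(n) pointers, but no Payload is copied
    // ... insert/erase the few elements that differ in the child ...
    return child;
}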
I realize this isn't what you were aiming at with the question, but as JamesKanze said, I know of no such container. Other than possibly a bizarre and dangerous use of the STL's rope class. Note that I said and meant STL, not the standard C++ library. They're different.

When to choose std::vector over std::map for key-value data?

Considering the positive effect of caching and data locality when searching in primary memory, I tend to use std::vector<> with std::pair<>-like key-value items and perform linear searches over it, if I know that the total number of key-value items will never grow "too large" and severely impact performance.
Lately I've been in lots of situations where I know beforehand that I will have huge amounts of key-value items and have therefore opted for std::map<> from the beginning.
I'd like to know how you make your decisions for the proper container in situations like the ones described above.
Do you
always use std::vector<> (or similar)?
always use std::map<> (or similar)?
have a gut feeling for where in the item-count range one is preferable over the other?
something entirely different?
Thanks!
I only rarely use std::vector with a linear search (except in conjunction with binary searching as described below). I suppose for a small enough amount of data it would be better, but with that little data it's unlikely that anything is going to provide a huge advantage.
Depending on usage pattern, a binary search on an std::vector can make sense though. A std::map works well when you need to update the data regularly during use. In quite a few cases, however, you load up some data and then you use the data -- but after you've loaded the data, it mostly remains static (i.e., it changes very little, if at all).
In this case, it can make a lot of sense to load the data into a vector, sort it if necessary, and then do binary searches on the data (e.g. std::lower_bound, std::equal_range). This gives pretty much the best of both worlds: low-complexity binary searches and good cache usage from high locality of reference (i.e., the vector is contiguous, as opposed to the linked structure of a std::map). The shortcoming, of course, is that insertions and deletions are slow. For that, I have used a variation of your original idea: store newly inserted data separately until it reaches some limit, and only then sort it in with the rest of the data, so a single search consists of a binary search of the main body of the data followed by a linear search of the (small amount of) newly inserted data.
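A sketch of that hybrid; the types and the merge threshold are arbitrary placeholders:
#include <algorithm>
#include <string>
#include <utility>
#include <vector>
typedef std::pair<std::string, int> KV;
std::vector<KV> sorted_data;  // bulk of the data, kept sorted by key
std::vector<KV> recent;       // small unsorted area for new insertions
const int* lookup(const std::string& key) {
    // Binary search the main body...
    auto it = std::lower_bound(sorted_data.begin(), sorted_data.end(), key,
        [](const KV& kv, const std::string& k) { return kv.first < k; });
    if (it != sorted_data.end() && it->first == key) return &it->second;
    // ...then linear-search the few recent insertions.
    for (const KV& kv : recent)
        if (kv.first == key) return &kv.second;
    return nullptr;
}
void insert(std::string key, int value) {
    recent.emplace_back(std::move(key), value);
    if (recent.size() > 256) {  // arbitrary merge threshold
        sorted_data.insert(sorted_data.end(), recent.begin(), recent.end());
        std::sort(sorted_data.begin(), sorted_data.end());
        recent.clear();
    }
}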
I would never make the choice solely on (possibly bogus) "efficiency" grounds, but always on what I am actually going to do with the container. Do I want to store duplicates? Is insertion order important? Will I sometimes want to search for the value not the key? Those kind of things.
Have you considered using sorted data structures? They tend to offer logarithmic searches and inserts - a reasonable trade-off. Personally I don't have any hard and fast rules other than liking maps for the ability to key on a human-readable/understandable value.
Of course there's plenty of discussion as well on the efficiency of maps vs. lists/vectors (sorted and unsorted) - if your key is a string that's 10,000 characters, it can take longer to do a string compare than to search through a list of just a few items, so you want to make sure that you can efficiently compare keys as well.
I almost always prefer to use map (or unordered_map, when a hash container makes more sense) vs. a vector.
That being said, I think your reasoning is backwards. I would tend to use a vector only when there are huge amounts of data, since a vector has a smaller memory footprint.
With the right kinds of datasets, you can load a vector and then sort it and binary_search it with a smaller footprint and similar performance characteristics to a map, especially if the dataset is stable after load.
Why are you not taking unordered_map into account?