The fastest way to iterate through a collection of objects - C++

First, to give you some background: I have some research code which performs a Monte Carlo simulation. Essentially, I iterate through a collection of objects, compute a number of vectors from their surfaces, then for each vector I iterate through the collection of objects again to see whether the vector hits another object (similar to ray tracing). The pseudocode looks something like this:
for each object {
    for a number of vectors {
        do some computations
        for each object {
            check if vector intersects
        }
    }
}
As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which compares arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays. However, when I used a vector in my actual code, it was somewhat slower than the array I had been using before.
So I went back to the test code and increased the complexity of the function each loop was calling (a dummy function equivalent to 'check if vector intersects'), and I found that as the complexity of the function increases, the execution-time gap between arrays and vectors shrinks until eventually the array is quicker.
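For concreteness, here is a minimal sketch of the kind of timing comparison described above (a hedged illustration assuming C++11 <chrono>; the element count and the do_work() dummy are placeholders of mine, not the original test code):
#include <chrono>
#include <cstdio>
#include <vector>

double do_work(double x) { return x * 1.000001; } // stand-in for the dummy "intersection" function

int main() {
    std::vector<double> objects(100000, 1.0);
    double sum = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (auto it = objects.begin(); it != objects.end(); ++it)
        sum += do_work(*it);                      // iterate via vector iterators
    auto t1 = std::chrono::steady_clock::now();

    long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("vector iterators: %lld us (sum=%f)\n", us, sum);
    return 0;
}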
Does anyone know why this occurs? It seems strange that the execution time inside the loop should affect the outer loop's run time.

What you are measuring is the difference in overhead between accessing an element of an array and an element of a vector (as well as the cost of creating/modifying them, depending on the operations you are performing).
EDIT: It will vary depending on the platform/OS/library you are using.

It probably depends on the implementation of vector iterators. Some implementations are better than others. (Visual C++ — at least older versions — I'm looking at you.)

I think the time difference I was witnessing was actually due to an error in the pointer-handling code. After making a few modifications to make the code more readable, the iterations were taking around the same time (give or take 1%) regardless of the container, which makes sense as all the containers have the same access mechanism.
However, I did notice that the vector runs a bit slower under OpenMP; this is probably due to the overhead of each thread maintaining its own copy of the iterator.
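For illustration, here is a hedged sketch of what the OpenMP version of the outer loop can look like when driven by a plain index rather than per-thread iterator copies; Object, intersects() and the reduction are placeholder names of mine, not the original code:
#include <cstdio>
#include <vector>

struct Object { double x, y, z, r; };

// trivial placeholder intersection test
bool intersects(const Object& a, const Object& b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    double rr = a.r + b.r;
    return dx * dx + dy * dy + dz * dz < rr * rr;
}

long count_hits(const std::vector<Object>& objects) {
    long hits = 0;
    #pragma omp parallel for reduction(+:hits)    // index-based loop, no shared iterator
    for (long i = 0; i < (long)objects.size(); ++i)
        for (long j = 0; j < (long)objects.size(); ++j)
            if (j != i && intersects(objects[i], objects[j]))
                ++hits;
    return hits;
}

int main() {
    std::vector<Object> objects(1000, Object{0.0, 0.0, 0.0, 1.0});
    std::printf("%ld hits\n", count_hits(objects));
    return 0;
}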

Related

What kind of container is appropriate for a "The Powder Toy" style sandbox?

Basically I am making a game similar to The Powder Toy. The world can have a maximum of 256,000 particles in a given frame. In my old JavaScript implementation, I looped through every pixel and it caused severe lag, because 256,000 is a lot to go through even when you only have around 20,000 particles active. I decided to keep a container of all the currently active particles instead, but then ran into the problem that querying a container for particles at a specific coordinate is also processor intensive. Therefore I came up with the simple solution of keeping all particles in a lookup table (a 2-dimensional array) and also having a heap (an array of active particles), and iterating through the heap while using the lookup table as a reference. Anything done to the heap is done to the lookup table, and vice versa.
This worked very well in JavaScript, but now that I need to port my program to C++ I am having trouble finding a good container to hold the heap. Vectors are very slow to add to and remove from, and I cannot easily remove an object from a vector by reference.
Which container should I use, and if there is a better way to handle particles like those in The Powder Toy, what is it? Thanks in advance, and here is a picture for those not familiar with The Powder Toy.
Notice how there is a particle at every pixel, yet similar builds run extremely fast on my computer. I wonder how they do it...
Vectors are fine for this kind of problem. They provide contiguous storage, so they allow better usage of cache.
Try to allocate a proper vector capacity beforehand, either via the constructor or by using std::vector::reserve(). Without this, reallocation of the storage will be triggered every time the vector's size exceeds its current capacity.
You can also try removing elements from a vector using std::swap() and then std::vector::pop_back(), like this:
std::swap(vect[indexToRemove], vect.back());
vect.pop_back();
instead of:
std::vector::erase()
The complexity of both std::vector::pop_back() and std::swap() is constant, and the complexity of std::vector::erase() is linear.
However, if you need to preserve the order of elements, the swap-and-pop method is of no use.
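A small sketch of this swap-and-pop ("unordered erase") idiom wrapped in a reusable helper; the function and parameter names are mine:
#include <cstddef>
#include <utility>
#include <vector>

template <typename T>
void unordered_erase(std::vector<T>& v, std::size_t index) {
    std::swap(v[index], v.back());   // O(1): move the last element into the hole
    v.pop_back();                    // O(1): shrink by one; element order is not preserved
}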

Create matrix of random numbers in C++ without looping

I need to create a multidimensional matrix of randomly distributed numbers using a Gaussian distribution, and am trying to keep the program as optimized as possible. Currently I am using Boost matrices, but I can't seem to find anything that accomplishes this without manually looping. Ideally, I would like something similar to Python's numpy.random.randn() function, but this must be done in C++. Is there another way to accomplish this that is faster than manually looping?
You're going to have to loop anyway, but you can eliminate the array lookup inside your loop. True N-dimensional array indexing is going to be expensive, so your best option is any library (or one you write yourself) that also gives you access to an underlying linear data store.
You can then loop over the entire N-dimensional array as if it were linear, avoiding many multiplications of the indexes by the dimensions.
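As a hedged sketch of that idea, the whole matrix can be kept in one flat, row-major buffer and filled with normally distributed values in a single std::generate pass (standard C++11 <random> here, not a Boost-specific API; sizes and seed are placeholders):
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

int main() {
    const std::size_t rows = 100, cols = 100;
    std::vector<double> m(rows * cols);               // linear underlying store

    std::mt19937 gen(42);                             // fixed seed for reproducibility
    std::normal_distribution<double> gauss(0.0, 1.0); // mean 0, stddev 1
    std::generate(m.begin(), m.end(), [&] { return gauss(gen); });

    // element (i, j) of the conceptual matrix is m[i * cols + j]
    return 0;
}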
Another optimization is to do away with the index altogether and take a pointer to the first element, then increment the pointer itself. This does away with a whole variable in the CPU, which can give the compiler more room for other things. For example, if you had 1000 elements in a vector:
vector<int> data;
data.resize(1000);
int *intPtr = &data[0];
int *endPtr = &data[0] + 1000;
while (intPtr != endPtr)
{
    *intPtr = rand_function();   // assign the random value (the original '==' only compared it)
    ++intPtr;
}
Here, two tricks have been applied: pre-calculating the end condition outside the loop itself (this avoids calling a function such as vector::size() 1000 times), and working with pointers to the data in memory rather than with indexes. An index gets internally converted to a pointer every time it's used to access the array. By storing the "current" pointer and adding 1 to it each time, the cost of calculating the pointer from an index 1000 times is eliminated.
This can be faster but it depends on the implementation. Compilers can do some of the same hand-optimizations, but not all of them. The rand_function should also be inline to avoid the function call overhead.
A warning, however: if you use std::vector with the pointer trick, it's not thread safe; if another thread changes the vector's length during the loop, the vector can get reallocated to a different place in memory. Don't do pointer tricks unless you'd be perfectly comfortable writing your own vector, array, or table classes as needed.

How to parallelize std::partition using TBB

Does anyone have any tips for efficiently parallelizing std::partition using TBB? Has this been done already?
Here is what I'm thinking:
if the array is small, std::partition it (serial) and return
else, treat the array as 2 interleaved arrays using custom iterators (interleave in cache-sized blocks)
start a parallel partition task for each pair of iterators (recurse to step 1)
swap elements between the two partition/middle pointers*
return the merged partition/middle pointer
*I am hoping in the average case this region will be small compared to the length of the array or compared to the swaps required if partitioning the array in contiguous chunks.
Any thoughts before I try it?
I'd treat it as a degenerate case of parallel sample sort. (Parallel code for sample sort can be found here.) Let N be the number of items. The degenerate sample sort will require Θ(N) temporary space, has Θ(N) work, and Θ(P + lg N) span (critical path). The last two values are important for analysis, since speedup is limited to work/span.
I'm assuming the input is a random-access sequence. The steps are:
Allocate a temporary array big enough to hold a copy of the input sequence.
Divide the input into K blocks. K is a tuning parameter. For a system with P hardware threads, K=max(4*P,L) might be good, where L is a constant for avoiding ridiculously small blocks. The "4*P" allows some load balancing.
Move each block to its corresponding position in the temporary array and partition it using std::partition. Blocks can be processed in parallel. Remember the offset of the "middle" for each block. You might want to consider writing a custom routine that both moves (in the C++11 sense) and partitions a block.
Compute the offset to where each part of a block should go in the final result. The offsets for the first part of each block can be done using an exclusive prefix sum over the offsets of the middles from step 3. The offsets for the second part of each block can be computed similarly by using the offset of each middle relative to the end of its block. The running sums in the latter case become offsets from the end of the final output sequence. Unless you're dealing with more than 100 hardware threads, I recommend using a serial exclusive scan.
Move the two parts of each block from the temporary array back to the appropriate places in the original sequence. Copying each block can be done in parallel.
There is a way to embed the scan of step 4 into steps 3 and 5, so that the span can be reduced to Θ(lg N), but I doubt it's worth the additional complexity.
If using tbb::parallel_for loops to parallelize steps 3 and 5, consider using affinity_partitioner to help threads in step 5 pick up what they left in cache from step 3.
Note that partitioning requires only Θ(N) work for Θ(N) memory loads and stores. Memory bandwidth could easily become the limiting resource for speedup.
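A minimal sketch of steps 2-5 above, assuming a std::vector input, tbb::parallel_for, and a simple serial exclusive scan; the function and variable names are mine, and the tuning discussed above (choice of K, affinity_partitioner, fusing the scan) is omitted:
#include <tbb/parallel_for.h>
#include <algorithm>
#include <cstddef>
#include <vector>

template <typename T, typename Pred>
std::size_t blocked_parallel_partition(std::vector<T>& data, Pred pred, std::size_t K)
{
    const std::size_t n = data.size();
    std::vector<T> tmp(n);                             // step 1: temporary array
    std::vector<std::size_t> mid(K), lo(K), hi(K);     // per-block bookkeeping

    // step 3: move each block into tmp and partition it there, in parallel
    tbb::parallel_for(std::size_t(0), K, [&](std::size_t b) {
        std::size_t begin = b * n / K, end = (b + 1) * n / K;
        std::move(data.begin() + begin, data.begin() + end, tmp.begin() + begin);
        auto m = std::partition(tmp.begin() + begin, tmp.begin() + end, pred);
        mid[b] = std::size_t(m - (tmp.begin() + begin));   // number of "true" elements
    });

    // step 4: serial exclusive scans giving each part's destination offset
    std::size_t trueTotal = 0;
    for (std::size_t b = 0; b < K; ++b) { lo[b] = trueTotal; trueTotal += mid[b]; }
    std::size_t falseOffset = trueTotal;
    for (std::size_t b = 0; b < K; ++b) {
        std::size_t begin = b * n / K, end = (b + 1) * n / K;
        hi[b] = falseOffset;
        falseOffset += (end - begin) - mid[b];
    }

    // step 5: move both parts of every block back to their final positions, in parallel
    tbb::parallel_for(std::size_t(0), K, [&](std::size_t b) {
        std::size_t begin = b * n / K, end = (b + 1) * n / K;
        std::move(tmp.begin() + begin, tmp.begin() + begin + mid[b], data.begin() + lo[b]);
        std::move(tmp.begin() + begin + mid[b], tmp.begin() + end, data.begin() + hi[b]);
    });
    return trueTotal;                                  // index of the partition point
}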
Why not parallelize something similar to std::partition_copy instead? The reasons are:
for std::partition, in-place swaps as in Adam's solution require logarithmic complexity due to recursive merge of the results.
you'll pay memory for parallelism anyway when using the threads and tasks.
if the objects are heavy, it is more reasonable to swap (shared) pointers anyway
if the results can be stored concurrently then threads can work independently.
It's pretty straightforward to apply tbb::parallel_for (for random-access iterators) or tbb::parallel_for_each (for non-random-access iterators) to start processing the input range. Each task can store its 'true' and 'false' results independently. There are lots of ways to store the results; some off the top of my head:
using tbb::parallel_reduce (only for random-access iterators), store the results locally to the task body and move-append them in join() from another task
use tbb::concurrent_vector's grow_by() method to copy local results in a batch, or just push_back() each result separately on arrival.
cache thread-local results in tbb::combinable TLS container and combine them later
The exact semantics of std::partition_copy can be achieved by copy from the temporary storage from above or
(only for random-access output iterators) use atomic<size_t> cursors to synchronize where to store the results (assuming there is enough space)
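To make the thread-local idea concrete, here is a hedged sketch using tbb::combinable buckets that are merged serially at the end; the function and variable names are mine, and the relative order of elements is not preserved:
#include <tbb/blocked_range.h>
#include <tbb/combinable.h>
#include <tbb/parallel_for.h>
#include <cstddef>
#include <vector>

template <typename T, typename Pred>
void parallel_partition_copy(const std::vector<T>& in, Pred pred,
                             std::vector<T>& out_true, std::vector<T>& out_false)
{
    tbb::combinable<std::vector<T>> trues, falses;     // thread-local result buckets

    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, in.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            std::vector<T>& t = trues.local();
            std::vector<T>& f = falses.local();
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                (pred(in[i]) ? t : f).push_back(in[i]);
        });

    // merge the thread-local buckets serially once all tasks are done
    trues.combine_each([&](const std::vector<T>& v) {
        out_true.insert(out_true.end(), v.begin(), v.end()); });
    falses.combine_each([&](const std::vector<T>& v) {
        out_false.insert(out_false.end(), v.begin(), v.end()); });
}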
Your approach should be correct, but why not follow the regular divide-and-conquer (or parallel_for) method? For two threads:
split the array in two. Turn your [start, end) into [start, middle), [middle, end).
run std::partition on both ranges in parallel.
merge the partitioned results. This can be done with a parallel_for.
This should make better use of the cache.
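A rough sketch of that two-way split for a random-access range, using std::async for the two halves and a (serial) std::rotate as the merge; this is an illustration of the idea rather than a tuned TBB version, and it relies on C++11's std::rotate returning the new partition point:
#include <algorithm>
#include <future>

template <typename It, typename Pred>
It partition_two_way(It first, It last, Pred pred) {
    It mid = first + (last - first) / 2;

    // partition the two halves in parallel
    auto left = std::async(std::launch::async,
                           [=] { return std::partition(first, mid, pred); });
    It m2 = std::partition(mid, last, pred);
    It m1 = left.get();

    // [m1, mid) holds "false" elements from the left half and [mid, m2) holds
    // "true" elements from the right half; rotating them yields one partition point
    return std::rotate(m1, mid, m2);
}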
It seems to me like this should parallelize nicely, any thoughts before I try it?
Well... maybe a few:
There's no real reason to create more tasks than you have cores. Since your algorithm is recursive, you also need to make sure you don't create additional threads after you reach that limit, because it would just be needless effort.
Keep in mind that splitting and merging the arrays costs you processing power, so set the split size so that it won't actually slow your calculations down. Splitting a 10-element array can be tempting, but won't get you where you want to be. Since the complexity of std::partition is linear, it's fairly easy to overestimate the speed of this task.
Since you asked and gave an algorithm, I hope you actually need parallelization here. If so - there's nothing much to add, the algorithm itself looks really fine :)

Concatenate 2 STL vectors in constant O(1) time

I'll give some context as to why I'm trying to do this, but ultimately the context can be ignored as it is largely a classic Computer Science and C++ problem (which must surely have been asked before, but a couple of cursory searches didn't turn up anything...)
I'm working with (large) real-time streaming point clouds, and have a case where I need to take 2/3/4 point clouds from multiple sensors and stick them together to create one big point cloud. I am in a situation where I actually do need all the data in one structure, whereas normally when people are just visualising point clouds they can get away with feeding them into the viewer separately.
I'm using Point Cloud Library 1.6, and on closer inspection its PointCloud class (under <pcl/point_cloud.h> if you're interested) stores all data points in an STL vector.
Now we're back in vanilla CS land...
PointCloud has a += operator for adding the contents of one point cloud to another. So far so good. But this method is pretty inefficient - if I understand it correctly, it 1) resizes the target vector, then 2) runs through all Points in the other vector, and copies them over.
This looks to me like a case of O(n) time complexity, which normally might not be too bad, but is bad news when dealing with at least 300K points per cloud in real time.
The vectors don't need to be sorted or analysed, they just need to be 'stuck together' at the memory level, so the program knows that once it hits the end of the first vector it just has to jump to the start location of the second one. In other words, I'm looking for an O(1) vector merging method. Is there any way to do this in the STL? Or is it more the domain of something like std::list#splice?
Note: This class is a pretty fundamental part of PCL, so 'non-invasive surgery' is preferable. If changes need to be made to the class itself (e.g. changing from vector to list, or reserving memory), they have to be considered in terms of the knock on effects on the rest of PCL, which could be far reaching.
Update: I have filed an issue over at PCL's GitHub repo to get a discussion going with the library authors about the suggestions below. Once there's some kind of resolution on which approach to go with, I'll accept the relevant suggestion(s) as answers.
A vector is not a list; it represents a sequence, but with the additional requirement that elements must be stored in contiguous memory. You cannot just bundle two vectors (whose buffers won't be contiguous) into a single vector without moving objects around.
This problem has been solved many times before such as with String Rope classes.
The basic approach is to make a new container type that stores pointers to point clouds. This is like a std::deque except that yours will have chunks of variable size. Unless your clouds chunk into standard sizes?
With this new container your iterators start in the first chunk, proceed to the end then move into the next chunk. Doing random access in such a container with variable sized chunks requires a binary search. In fact, such a data structure could be written as a distorted form of B+ tree.
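A minimal sketch of such a chunked, read-only view: appending a cloud is O(1), while random access resolves the owning chunk with a binary search over cumulative sizes. The class and member names are mine, not part of PCL:
#include <algorithm>
#include <cstddef>
#include <vector>

template <typename T>
class ChunkedView {
public:
    // O(1): record a pointer to the chunk and extend the running total
    void append(const std::vector<T>* chunk) {
        chunks_.push_back(chunk);
        offsets_.push_back((offsets_.empty() ? 0 : offsets_.back()) + chunk->size());
    }

    std::size_t size() const { return offsets_.empty() ? 0 : offsets_.back(); }

    // random access: binary search for the owning chunk, then index into it
    const T& operator[](std::size_t i) const {
        std::size_t c = std::upper_bound(offsets_.begin(), offsets_.end(), i) - offsets_.begin();
        std::size_t before = (c == 0) ? 0 : offsets_[c - 1];
        return (*chunks_[c])[i - before];
    }

private:
    std::vector<const std::vector<T>*> chunks_;
    std::vector<std::size_t> offsets_;   // cumulative element counts
};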
There is no vector equivalent of splice - there can't be, specifically because of the memory layout requirements, which are probably the reason it was selected in the first place.
There's also no constant-time way to concatenate vectors.
I can think of one (fragile) way to concatenate raw arrays in constant time, but it depends on them being aligned on page boundaries at both the beginning and the end, and then re-mapping them to be adjacent. This is going to be pretty hard to generalise.
There's another way to make something that looks like a concatenated vector, and that's with a wrapper container which works like a deque, and provides a unified iterator and operator[] over them. I don't know if the point cloud library is flexible enough to work with this, though. (Jamin's suggestion is essentially to use something like this instead of the vector, and Zan's is roughly what I had in mind).
No, you can't concatenate two vectors by a simple link, you actually have to copy them.
However! If you implement move-semantics in your element type, you'd probably get significant speed gains, depending on what your element contains. This won't help if your elements don't contain any non-trivial types.
Further, if you have your vector reserve the needed memory well in advance, that would also help speed things up by not requiring a resize (which would cause an undesirably large new allocation, possibly having to defragment at that memory size, and then a huge memcpy).
Barring that, you might want to create some kind of hybrid between linked lists and vectors, with each 'element' of the list being a vector of 10k elements, so you only need to follow a list link once every 10k elements, while it lets you grow dynamically much more easily and makes concatenation a breeze.
std::list<std::vector<element>> forIllustrationOnly; // just roll your own custom type; std::list has no operator[]
blockSize = 10000;
index = 52403;
blockIndex = index / blockSize;    // which vector in the list
elementIndex = index % blockSize;  // position within that vector
forIllustrationOnly[blockIndex][elementIndex] = still fairly fast lookups
forIllustrationOnly.push_back(block_of_points) = much faster appending and removing of blocks of points.
You will not get this scaling behaviour with a vector, because with a vector you do not get around the copying, and you cannot copy an arbitrary amount of data in constant time.
I do not know PointCloud, but if you can use other list types, e.g. a linked list, this behaviour is entirely possible. You might find a linked-list implementation that works in your environment and can simply stick the second list onto the end of the first, as you imagined.
Take a look at Boost range join at http://www.boost.org/doc/libs/1_54_0/libs/range/doc/html/range/reference/utilities/join.html
This will take 2 ranges and join them. Say you have vector1 and vector2.
You should be able to write
auto combined = boost::join(vector1, vector2);
Then you can use combined with algorithms, etc as needed.
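A hedged usage sketch (assuming Boost.Range's boost::join as documented at the link above; the values are just for illustration):
#include <boost/range/join.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vector1 = {1, 2, 3};
    std::vector<int> vector2 = {4, 5, 6};

    // iterates vector1 and then vector2 without copying either
    for (int x : boost::join(vector1, vector2))
        std::cout << x << ' ';
    std::cout << '\n';
    return 0;
}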
No O(1) copy for vector, ever, but you should check:
Is the element type trivially copyable (i.e., can it be copied with memcpy)?
If so, is my vector implementation leveraging this fact, or is it naively looping over all 300k elements, executing a trivial assignment (or worse, a copy-constructor call) for each element?
What I have seen is that, while both memcpy and an assignment-for-loop have O(n) complexity, a solution leveraging memcpy can be much, much faster.
So, the problem might be that the vector implementation is suboptimal for trivial types.
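A small sketch of the two O(n) append strategies in question: the range insert is the form a good implementation can lower to a single bulk memmove for trivially copyable element types, while the explicit memcpy shows what "leveraging this fact" means by hand. The function names are mine:
#include <cstddef>
#include <cstring>
#include <vector>

// idiomatic: a decent implementation can turn this into one bulk memmove
void append_range(std::vector<float>& dst, const std::vector<float>& src) {
    dst.insert(dst.end(), src.begin(), src.end());
}

// explicit: only valid for trivially copyable element types; assumes dst and src are distinct
void append_memcpy(std::vector<float>& dst, const std::vector<float>& src) {
    if (src.empty()) return;
    std::size_t old_size = dst.size();
    dst.resize(old_size + src.size());
    std::memcpy(dst.data() + old_size, src.data(), src.size() * sizeof(float));
}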

Why is vector faster than map in one test, but not the other?

I've always been told vectors are fast, and in my years of programming experience I've never seen anything to contradict that. I decided to (prematurely optimize and) write an associative class that was a thin wrapper around a sequential container (namely ::std::vector) and provided the same interface as ::std::map. Most of the code was really easy, and I got it working with little difficulty.
However, in my tests of various sized POD types (4 to 64 bytes) and std::strings, with counts varying from eight to two thousand, ::std::map::find was faster than my ::associative::find, usually around 15% faster, for almost all tests. I made a Short, Self Contained, Correct (Compilable) Example that clearly shows this at ideone. I checked MSVC9's implementation of ::std::map::find and confirmed that it matches my vecfind and ::std::lower_bound code quite closely, and I cannot account for why ::std::map::find runs faster, except for a discussion on Stack Overflow where people speculated that the binary search method does not benefit at all from the locality of the vector until the last comparison (making it no faster), and that it requires pointer arithmetic that ::std::map nodes don't require, making it slower.
Today someone challenged me on this, and provided this code at ideone, which when I tested, showed the vector to be over twice as fast.
Do the coders of Stack Overflow want to enlighten me on this apparent discrepancy? I've gone over both sets of code, and they seem equivalent to me, but maybe I'm blind from playing with both of them so much.
(Footnote: this is very close to one of my previous questions, but my code had several bugs which were addressed. Due to new information/code, I felt this was different enough to justify a separate question. If not, I'll work on merging them.)
What makes you think that mapfind() is faster than vecfind()?
The ideone output for your code reports about 50% more ticks for mapfind() than for vecfind(). Running the code here (x86_64 linux, g++-4.5.1), mapfind() takes about twice as long as vecfind().
Making the map/vector larger by a factor of 10, the difference increases to about 3×.
Note however that the sum of the second components is different. The map contains only one pair with any given first component (with my local PRNG, that creates a map two elements short), while the vector can contain multiple such pairs.
The number of elements you're putting into your test container are more than the number of possible outputs from rand() in Microsoft, thus you're getting repeated numbers. The sorted vector will contain all of them while the map will throw out the duplicates. Check the sizes after filling them - the vector will have 100000 elements, the map 32768. Since the map is much shorter, of course it will have better performance.
Try a multimap for an apples-to-apples comparison.
I see some problems with the code ( http://ideone.com/41iKt ) you posted on ideone.com. (ideone actually shows the vector as faster, but a local build with the Visual Studio 11 Developer Preview shows the map as faster.)
First I moved the map variable and used it to initialize the vector to get the same element ordering and uniquing, and then I gave lower_bound a custom comparator that only compares first, since that's what map will be doing. After these changes Visual Studio 11 shows the vector as faster for the same 100,000 elements (although the ideone time doesn't change significantly). http://ideone.com/b3OnH
With test_size reduced to 8, map is still faster. This isn't surprising, because this is how algorithmic complexity works: all the constants in the function that truly describes the run time matter at small N. I have to raise test_size to about 2700 for the vector to pull even with, and then ahead of, the map on this system.
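For reference, a minimal sketch of the lower_bound-on-a-sorted-vector lookup with a comparator that only examines .first (mirroring what map does with its keys); the names are mine, not the code from the question:
#include <algorithm>
#include <utility>
#include <vector>

using Pair = std::pair<int, int>;

// assumes v is sorted by .first
const Pair* vecfind(const std::vector<Pair>& v, int key) {
    auto it = std::lower_bound(v.begin(), v.end(), key,
        [](const Pair& element, int k) { return element.first < k; });
    return (it != v.end() && it->first == key) ? &*it : nullptr;
}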
A sorted std::vector has two advantages over std::map:
Better data locality: vector stores all data contiguously in memory
Smaller memory footprint: vector does not need much bookkeeping data (e.g., no tree node objects)
Whether these two effects matter depends on the scenario. There are two factors that are likely to have a major impact:
Data type
It is an advantage for the std::vector if the elements are primitive types like integers. In that case, the locality really helps because all data needed by the search is in a contiguous location in memory.
If the elements are, say, strings, then the locality does not help that much. The contiguous vector memory now only stores pointers to objects that are potentially scattered all over the heap.
Data size
If the std::vector fits into a particular cache level but the std::map does not, the std::vector will have an advantage. This is especially the case if you keep repeating the test over the same data.