Improving performance when randomizing a std::list - c++

I have a std::list which I am currently randomizing using a Fisher-Yates shuffle (see http://en.wikipedia.org/wiki/Fisher-Yates_shuffle). To summarize, my code carries out the following steps on the list:
Loop through each element of the list.
Swap the element with a randomly chosen element from the current position onwards, including itself.
Because lists don't provide random access, this means that I am iterating over the entire list in step 1, and for each element I'm iterating again, on average over half the remaining elements from that point onwards. This is a major bottleneck in my program's performance, so I'm looking to improve it. For other reasons I need to continue using list as my container, but I'm considering converting to a vector at the start of my randomize function, and then converting back to list at the end. My lists typically contain 300 - 400 items, so I would guess that the cost of conversion between containers will be worth it to avoid traversing the items sequentially.
My question is: does this seem like the best way to go about optimizing the code? Is there a better way?

One easy improvement is to copy the data into a vector, shuffle the vector, and copy it back into a list. That is what was suggested in comments by Max and PeskyGnat:
vector<int> myVector(myList.size());
copy(myList.begin(), myList.end(), myVector.begin());
random_shuffle(myVector.begin(), myVector.end());
list<int> myListShuffled(myVector.begin(), myVector.end());
This implementation is pretty fast. But, it will do three passes over the vector, and you can get it down to two passes by implementing the shuffle yourself:
vector<int> myVector(myList.size());
int lastPos = 0;
for (list<int>::iterator it = myList.begin(); it != myList.end(); ++it, ++lastPos) {
    int insertPos = rand() % (lastPos + 1);
    if (insertPos < lastPos) {
        myVector[lastPos] = myVector[insertPos];
    }
    myVector[insertPos] = *it;
}
list<int> myListShuffled(myVector.begin(), myVector.end());
Since the first version is much easier to understand and much less error-prone, it's almost always preferable... unless perhaps this bit of code is critical for your performance (and you have confirmed that with measurement).
EDIT: By the way, since you are looking at the Wikipedia article, the second code sample uses the "inside-out" variant of Fisher-Yates.
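A further aside: std::random_shuffle was deprecated in C++14 and removed in C++17. A minimal sketch of the same copy-shuffle-copy approach using std::shuffle with a seeded engine (the function and variable names here are illustrative):
#include <algorithm>
#include <list>
#include <random>
#include <vector>

std::list<int> shuffled(const std::list<int>& myList)
{
    std::vector<int> v(myList.begin(), myList.end()); // copy out of the list
    static std::mt19937 gen(std::random_device{}());  // seeded engine
    std::shuffle(v.begin(), v.end(), gen);            // unbiased Fisher-Yates
    return std::list<int>(v.begin(), v.end());        // copy back into a list
}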

Related

Is there an even faster approach than swap-and-pop for erasing from std::vector?

I am asking this as the other relevant questions on SO seem to be either for older versions of the C++ standard, do not mention any form of parallelization, or are focused on keeping the ordering/indexing the same as elements are removed.
I have a vector of potentially hundreds of thousands or millions of elements (which are fairly light structures, around ~20 bytes assuming they're compacted down).
Due to other constraints, it must be a std::vector; other containers (like std::forward_list) either would not work or would be even less optimal for other uses.
I recently swapped from the simple it = myVec.erase(it) approach to pop-and-swap, using something like this:
for (std::size_t i = 0; i < myVec.size();) {
    // Do calculations to determine if element must be removed
    // ...
    // Remove if needed
    if (elementMustBeRemoved) {
        myVec[i] = myVec.back();
        myVec.pop_back();
    } else {
        i++;
    }
}
This works, and was a significant improvement. It cut the runtime of the method down to ~61% of what it was previously. But I would like to improve this further.
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently? Like passing a vector of indices to erase() and have C++ do some magic under the hood to minimize movement of data?
If so, I could have threads individually gather indices that must be removed in parallel, and then combine them and pass them to erase().
Take a look at the std::remove_if algorithm. You could use it like this:
auto firstToErase = std::remove_if(myVec.begin(), myVec.end(),
    [](const T& x) {
        // Do calculations to determine if element must be removed
        // ...
        return elementMustBeRemoved;
    });
myVec.erase(firstToErase, myVec.end());
cppreference says that the following code is a possible implementation of remove_if:
template<class ForwardIt, class UnaryPredicate>
ForwardIt remove_if(ForwardIt first, ForwardIt last, UnaryPredicate p)
{
    first = std::find_if(first, last, p);
    if (first != last)
        for (ForwardIt i = first; ++i != last; )
            if (!p(*i))
                *first++ = std::move(*i);
    return first;
}
Instead of swapping with the last element, it moves through the container building up a range of elements that should be erased, until this range is at the very end of the vector. This looks like a more cache-friendly solution, and you might notice some performance improvement on a very big vector.
If you want to experiment with a parallel version, there is an overload (4) that lets you specify an execution policy.
Or, since C++20, you can type slightly less and use std::erase_if.
However, in that case you lose the option to choose an execution policy.
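For illustration, a minimal C++20 sketch (the predicate is a hypothetical stand-in for the question's removal test):
#include <vector>

void remove_matching(std::vector<int>& myVec)
{
    // C++20: std::erase_if combines remove_if + erase in one call.
    std::erase_if(myVec, [](int x) { return x % 2 == 0; }); // hypothetical predicate
}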
Is there an even faster approach than swap-and-pop for erasing from std::vector?
Ever since C++11, the optimal removal of a single element from a vector without preserving order has been move-and-pop rather than swap-and-pop.
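A minimal sketch of move-and-pop (the function name is mine, not standard):
#include <cstddef>
#include <utility>
#include <vector>

// Remove the element at index i without preserving order:
// move the last element into the hole, then shrink by one.
void move_and_pop(std::vector<int>& v, std::size_t i)
{
    v[i] = std::move(v.back());
    v.pop_back();
}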
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently?
The remove-erase (std::erase in C++20) idiom is the most efficient that the standard provides. std::remove_if does preserve order, and if you don't care about that, then a more efficient algorithm may be possible. But the standard library does not come with an unstable remove out of the box. The algorithm goes as follows:
Find first element to be removed (a)
Find last element to not be removed (b)
Move b to a.
Repeat between a and b until iterators meet.
There is a proposal P0048 to add such an algorithm to the standard library, and there is a demo implementation in https://github.com/WG21-SG14/SG14/blob/6c5edd5c34e1adf42e69b25ddc57c17d99224bb4/SG14/algorithm_ext.h#L84
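A minimal sketch of that unstable remove, assuming bidirectional iterators (the name unstable_remove_if is mine; P0048's interface may differ):
#include <utility>

template <class BidirIt, class UnaryPredicate>
BidirIt unstable_remove_if(BidirIt first, BidirIt last, UnaryPredicate p)
{
    while (true) {
        // (a) Find the first element to be removed.
        while (first != last && !p(*first)) ++first;
        if (first == last) break;
        // (b) Find the last element to not be removed.
        do { --last; } while (first != last && p(*last));
        if (first == last) break;
        // Move b into the hole at a, then continue between a and b.
        *first = std::move(*last);
        ++first;
    }
    return first; // new logical end; erase [first, original last)
}
Usage then mirrors the remove-erase idiom: myVec.erase(unstable_remove_if(myVec.begin(), myVec.end(), pred), myVec.end());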

How to add an element to the front of a vector in C++? [duplicate]

iterator insert ( iterator position, const T& x );
is the declaration of the insert member function of the std::vector class.
This function's return type is an iterator pointing to the inserted element. My question: given this return type, what is the most efficient way of inserting at the beginning? (This is part of a larger program where speed is of the essence, so I am looking for the most computationally efficient way.) Is it the following?
//Code 1
vector<int> intvector;
vector<int>::iterator it = intvector.begin();
for (int i = 1; i <= 100000; i++) {
    it = intvector.insert(it, i);
}
Or,
//Code 2
vector<int> intvector;
for (int i = 1; i <= 100000; i++) {
    intvector.insert(intvector.begin(), i);
}
Essentially, in Code 2, is the parameter,
intvector.begin()
"Costly" to evaluate computationally as compared to using the returned iterator in Code 1 or should both be equally cheap/costly?
If one of the critical needs of your program is to insert elements at the beginning of a container, then you should use a std::deque and not a std::vector. std::vector is only good at inserting elements at the end.
Other containers have been introduced in C++11. I should start to find an updated graph with these new containers and insert it here.
The efficiency of obtaining the insertion point won't matter in the least - it will be dwarfed by the inefficiency of constantly shuffling the existing data up every time you do an insertion.
Use std::deque for this, that's what it was designed for.
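A minimal sketch of the deque version of the question's loop (variable name illustrative):
#include <deque>

int main()
{
    std::deque<int> intdeque;
    for (int i = 1; i <= 100000; i++) {
        intdeque.push_front(i); // O(1) at the front, no shifting of existing data
    }
    return 0;
}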
An old thread, but it showed up at a coworker's desk as the first search result for a Google query.
There is one alternative to using a deque that is worth considering:
std::vector<T> foo;
for (int i = 0; i < 100000; ++i)
foo.push_back(T());
std::reverse( foo.begin(), foo.end() );
You still use a vector, which is significantly more engineered than deque for performance. Also, swaps (which is what reverse uses) are quite efficient. On the other hand, the complexity, while still linear, is increased by 50%.
As always, measure before you decide what to do.
If you're looking for a computationally efficient way of inserting at the front, then you probably want to use a deque instead of a vector.
Most likely deque is the appropriate solution, as suggested by others. But just for completeness, suppose that you need to do this front-insertion just once, that elsewhere in the program you don't need to do other operations on the front, and that otherwise vector provides the interface you need. If all of those are true, you could add the items with the very efficient push_back and then reverse the vector to get everything in order. That would have linear complexity rather than quadratic, as it would be when inserting at the front.
When you use a vector, you usually know the actual number of elements it is going to have. In this case, resizing to the needed number of elements (100000 in the case you show) and filling them by using the [] operator is the fastest way. If you really need an efficient insert at the front, you can use deque or list, depending on your algorithms.
You may also consider inverting the logic of your algorithm and inserting at the end, that is usually faster for vectors.
I think you should change the type of your container if you really want to insert data at the beginning. That's the reason why vector does not have a push_front() member function.
Intuitively, I agree with #Happy Green Kid Naps and ran a small test showing that for small sizes (1 << 10 elements of a primitive data type) it doesn't matter. For larger container sizes (1 << 20), however, std::deque seems to be of higher performance than reversing an std::vector. So, benchmark before you decide. Another factor might be the element type of the container.
Test 1: push_front (a) 1<<10 or (b) 1<<20 uint64_t into std::deque
Test 2: push_back (a) 1<<10 or (b) 1<<20 uint64_t into std::vector followed by std::reverse
Results:
Test 1 - deque (a) 19 µs
Test 2 - vector (a) 19 µs
Test 1 - deque (b) 6339 µs
Test 2 - vector (b) 10588 µs
You can support:
Insertion at the front.
Insertion at the end.
Changing the value at any position.
Accessing the value at any index.
All of the above operations in O(1) time complexity.
Note: You just need to know an upper bound on how far the structure can grow to the left and to the right.
#include <iostream>
using namespace std;

class Vector {
public:
    int front, end;
    int arr[100100]; // set this according to 2*max_size

    Vector(int initialize) {
        arr[100100 / 2] = initialize; // initializing value
        front = end = 100100 / 2;
        front--; end++;
    }
    void push_back(int val) {
        arr[end] = val;
        end++;
    }
    void push_front(int val) {
        if (front < 0) { return; } // set the initial size accordingly
        arr[front] = val;
        front--;
    }
    int value(int idx) {
        return arr[front + 1 + idx]; // front points one slot before the first element
    }
    // similarly, create a function to change the value at any index
};

int main() {
    Vector v(2);
    for (int i = 1; i < 100; i++) {
        v.push_front(i); // O(1)
    }
    for (int i = 0; i < 20; i++) {
        cout << v.value(i) << " "; // access the value in O(1)
    }
    return 0;
}
This may draw the ire of some because it does not directly answer the question, but it may help to keep in mind that retrieving the items from a std::vector in reverse order is both easy and fast.
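For example, a short sketch of reading a vector back to front with reverse iterators:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3, 4};
    // rbegin()/rend() walk the vector from back to front at no extra cost.
    for (auto it = v.rbegin(); it != v.rend(); ++it) {
        std::cout << *it << ' ';
    }
    return 0;
}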

count the number of distinct absolute values among the elements of the array

I was asked an interview question to find the number of distinct absolute values among the elements of the array. I came up with the following solution (in C++) but the interviewer was not happy with the code's run time efficiency.
I will appreciate pointers as to how I can improve the run time efficiency of this code?
Also, how do I calculate the efficiency of the code below? The for loop executes A.size() times. However, I am not sure about the efficiency of STL std::find (in the worst case it could be O(n), so does that make this code O(n²)?).
Code is:
int countAbsoluteDistinct(const std::vector<int> &A) {
    using namespace std;
    list<int> x;
    vector<int>::const_iterator it;
    for (it = A.begin(); it != A.end(); it++)
        if (find(x.begin(), x.end(), abs(*it)) == x.end())
            x.push_back(abs(*it));
    return x.size();
}
To propose an alternative to the set-based code:
Note that we don't want to alter the caller's vector, so we take it by value. It's better to let the compiler copy for us than to make our own copy. If it's OK to destroy their values, we can take it by non-const reference instead.
#include <vector>
#include <algorithm>
#include <iterator>
#include <cstdlib>
using namespace std;
int count_distinct_abs(vector<int> v)
{
    // O(n) where n = distance(v.begin(), v.end())
    transform(v.begin(), v.end(), v.begin(), [](int x) { return abs(x); });
    // O(n log n); since C++11, std::sort guarantees this even in the worst case (introsort).
    sort(v.begin(), v.end());
    // unique takes a sorted range and moves things around to get the duplicated
    // items to the back; it returns an iterator to the end of the unique section of the range.
    auto unique_end = unique(v.begin(), v.end()); // again n comparisons
    return distance(v.begin(), unique_end); // constant time for random access iterators (like vector's)
}
The advantage here is that we only allocate/copy once if we decide to take by value, and the rest is all done in-place while still giving you an average complexity of O(n log n) on the size of v.
std::find() is linear (O(n)). I'd use a sorted associative container to handle this, specifically std::set.
#include <vector>
#include <set>
using namespace std;
int distinct_abs(const vector<int>& v)
{
    std::set<int> distinct_container;
    for (auto curr_int = v.begin(), end = v.end(); // no need to call v.end() multiple times
         curr_int != end;
         ++curr_int)
    {
        // std::set only allows single entries;
        // since that is what we want, we don't care that the insert
        // fails when a second (or later) occurrence of the same
        // value is attempted.
        distinct_container.insert(abs(*curr_int));
    }
    return distinct_container.size();
}
There is still some runtime penalty with this approach. Using a separate container incurs the cost of dynamic allocations as the container size increases. You could do this in place and not incur this penalty; however, with code at this level it's sometimes better to be clear and explicit and let the optimizer (in the compiler) do its work.
Yes, this will be O(N²) -- you'll end up with a linear search for each element.
A couple of reasonably obvious alternatives would be to use an std::set or std::unordered_set. If you don't have C++0x, you can replace std::unordered_set with tr1::unordered_set or boost::unordered_set.
Each insertion in an std::set is O(log N), so your overall complexity is O(N log N).
With unordered_set, each insertion has constant (expected) complexity, giving linear complexity overall.
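A minimal sketch of the unordered_set variant (the function name is mine):
#include <cstdlib>
#include <unordered_set>
#include <vector>

int count_distinct_abs_hashed(const std::vector<int>& v)
{
    std::unordered_set<int> seen;
    seen.reserve(v.size()); // avoid rehashing as the table grows
    for (int x : v) {
        seen.insert(std::abs(x)); // duplicate inserts simply fail
    }
    return static_cast<int>(seen.size());
}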
Basically, replace your std::list with a std::set. This gives you O(log(set.size())) searches + O(1) insertions, if you do things properly. Also, for efficiency, it makes sense to cache the result of abs(*it), although this will have only a minimal (negligible) effect. The efficiency of this method is about as good as you can get it, without using a really nice hash (std::set uses binary trees) or more information about the values in the vector.
Since I was not happy with the previous answer, here is mine today. Your initial question does not mention how big your vector is. Suppose your std::vector<> is extremely large and has very few duplicates (why not?). This means that using another container (e.g. std::set<>) will basically duplicate your memory consumption. Why would you do that, since your goal is simply to count distinct values?
I like #Flame's answer, but I was not really happy with the call to std::unique. You've spent lots of time carefully sorting your vector and then simply discard the sorted array, when you could be re-using it afterwards.
I could not find anything really elegant in the standard library, so here is my proposal (a mixture of std::transform + std::abs + std::sort, but without touching the sorted array afterwards).
// count the number of distinct absolute values among the elements of the sorted container
template<class ForwardIt>
typename std::iterator_traits<ForwardIt>::difference_type
count_unique(ForwardIt first, ForwardIt last)
{
    if (first == last)
        return 0;

    typename std::iterator_traits<ForwardIt>::difference_type count = 1;
    ForwardIt previous = first;
    while (++first != last) {
        if (!(*previous == *first)) ++count;
        ++previous;
    }
    return count;
}
A bonus point: it works with forward iterators:
#include <iostream>
#include <list>

int main()
{
    std::list<int> nums {1, 3, 3, 3, 5, 5, 7, 8};
    std::cout << count_unique(std::begin(nums), std::end(nums)) << std::endl;

    const int array[] = {0, 0, 0, 1, 2, 3, 3, 3, 4, 4, 4, 4};
    const int n = sizeof array / sizeof *array;
    std::cout << count_unique(array, array + n) << std::endl;
    return 0;
}
Two points.
std::list is very bad for search. Each search is O(n).
Use std::set. Insertion is logarithmic, it removes duplicates, and it keeps the elements sorted. Inserting every value takes O(n log n); then use set::size to find how many distinct values there are.
EDIT:
To answer part 2 of your question, the C++ standard mandates the worst case for operations on containers and algorithms.
Find: Since you are using the free-function version of find, which takes iterators, it cannot assume anything about the passed-in sequence; it cannot assume that the range is sorted, so it must traverse every item until it finds a match, which is O(n).
If you are using set::find on the other hand, this member find can utilize the structure of the set, and its performance is required to be O(log N), where N is the size of the set.
To answer your second question first, yes the code is O(n^2) because the complexity of find is O(n).
You have options to improve it. If the range of numbers is low you can just set up a large enough array and increment counts while iterating over the source data. If the range is larger but sparse, you can use a hash table of some sort to do the counting. Both of these options are linear complexity.
Otherwise, I would do one iteration to take the abs value of each item, then sort them, and then you can do the aggregation in a single additional pass. The complexity here is n log(n) for the sort. The other passes don't matter for complexity.
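A minimal sketch of the low-range counting idea mentioned above (the bound LIMIT is an assumption; it must exceed every |value| in the input):
#include <cstdlib>
#include <vector>

int count_distinct_abs_small_range(const std::vector<int>& v)
{
    const int LIMIT = 1000000; // assumed known bound on |x| for all inputs
    std::vector<bool> seen(LIMIT + 1, false);
    int count = 0;
    for (int x : v) {
        int a = std::abs(x);
        if (!seen[a]) { // first time this absolute value appears
            seen[a] = true;
            ++count;
        }
    }
    return count;
}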
I think a std::map could also be interesting:
int absoluteDistinct(const vector<int> &A)
{
    map<int, char> my_map;
    for (vector<int>::const_iterator it = A.begin(); it != A.end(); it++)
    {
        my_map[abs(*it)] = 0;
    }
    return my_map.size();
}
As #Jerry said, to improve a little on the theme of most of the other answers, instead of using a std::map or std::set you could use a std::unordered_map or std::unordered_set (or the boost equivalent).
This would reduce the runtime from O(n lg n) to O(n).
Another possibility, depending on the range of the data given, you might be able to do a variant of a radix sort, though there's nothing in the question that immediately suggests this.
Sort the list with a Radix style sort for O(n)ish efficiency. Compare adjacent values.
The best way is to customize the quicksort algorithm so that, whenever partitioning encounters two equal elements, it overwrites the second duplicate with the last element in the range and then shrinks the range. This ensures you will not process duplicate elements twice. Also, once the quicksort is done, the size of the element range is the answer.
The complexity is still O(n log n), but this should save at least two passes over the array.
Also, the savings are proportional to the percentage of duplicates. Imagine if they twist the original question with, say, "90% of the elements are duplicates"...
One more approach:
Space efficient: use a map; O(log N) per insert, O(n log N) overall, and just keep a count of the elements successfully inserted.
Time efficient: use a hash table; O(n) overall for inserts, and again just keep a count of the elements successfully inserted.
You have nested loops in your code. If you scan each element against the whole array, it gives you O(n^2) time complexity, which is not acceptable in most scenarios. That is the reason merge sort and quicksort came about: to save processing cycles and machine effort. I suggest you go through the suggested links and redesign your program.

std::map and performance, intersecting sets

I'm intersecting some sets of numbers, and doing this by storing a count of each time I see a number in a map.
I'm finding the performance to be very slow.
Details:
- One of the sets has 150,000 numbers in it
- The intersection of that set and another set takes about 300ms the first time, and about 5000ms the second time
- I haven't done any profiling yet, but every time I break into the debugger while doing the intersection, it's in malloc.c!
So, how can I improve this performance? Switch to a different data structure? Somehow improve the memory allocation performance of map?
Update:
Is there any way to ask std::map or boost::unordered_map to pre-allocate some space?
Or, are there any tips for using these efficiently?
Update2:
See Fast C++ container like the C# HashSet<T> and Dictionary<K,V>?
Update3:
I benchmarked set_intersection and got horrible results:
(set_intersection) Found 313 values in the intersection, in 11345ms
(set_intersection) Found 309 values in the intersection, in 12332ms
Code:
int runIntersectionTestAlgo()
{
    set<int> set1;
    set<int> set2;
    set<int> intersection;

    // Create 100,000 values for set1
    for (int i = 0; i < 100000; i++)
    {
        int value = 1000000000 + i;
        set1.insert(value);
    }

    // Create 1,000 values for set2
    for (int i = 0; i < 1000; i++)
    {
        int random = rand() % 200000 + 1;
        random *= 10;
        int value = 1000000000 + random;
        set2.insert(value);
    }

    set_intersection(set1.begin(), set1.end(), set2.begin(), set2.end(),
                     inserter(intersection, intersection.end()));
    return intersection.size();
}
You should definitely be using preallocated vectors which are way faster. The problem with doing set intersection with stl sets is that each time you move to the next element you're chasing a dynamically allocated pointer, which could easily not be in your CPU caches. With a vector the next element will often be in your cache because it's physically close to the previous element.
The trick with vectors, is that if you don't preallocate the memory for a task like this, it'll perform EVEN WORSE because it'll go on reallocating memory as it resizes itself during your initialization step.
Try something like this instead - it'll be WAY faster.
int runIntersectionTestAlgo() {
    vector<int> vector1; vector1.reserve(100000);
    vector<int> vector2; vector2.reserve(1000);

    // Create 100,000 values for vector1
    for (int i = 0; i < 100000; i++) {
        int value = 1000000000 + i;
        vector1.push_back(value);
    }
    sort(vector1.begin(), vector1.end());

    // Create 1,000 values for vector2
    for (int i = 0; i < 1000; i++) {
        int random = rand() % 200000 + 1;
        random *= 10;
        int value = 1000000000 + random;
        vector2.push_back(value);
    }
    sort(vector2.begin(), vector2.end());

    // Reserve at most 1,000 spots for the intersection
    vector<int> intersection; intersection.reserve(min(vector1.size(), vector2.size()));
    set_intersection(vector1.begin(), vector1.end(), vector2.begin(), vector2.end(),
                     back_inserter(intersection));
    return intersection.size();
}
Without knowing any more about your problem, "check with a good profiler" is the best general advice I can give. Beyond that...
If memory allocation is your problem, switch to some sort of pooled allocator that reduces calls to malloc. Boost has a number of custom allocators that should be compatible with std::allocator<T>. In fact, you may even try this before profiling, if you've already noticed debug-break samples always ending up in malloc.
If your number-space is known to be dense, you can switch to using a vector- or bitset-based implementation, using your numbers as indexes in the vector.
If your number-space is mostly sparse but has some natural clustering (this is a big if), you may switch to a map-of-vectors. Use higher-order bits for map indexing, and lower-order bits for vector indexing. This is functionally very similar to simply using a pooled allocator, but it is likely to give you better caching behavior. This makes sense, since you are providing more information to the machine (clustering is explicit and cache-friendly, rather than a random distribution you'd expect from pool allocation).
I would second the suggestion to sort them. There are already STL set algorithms that operate on sorted ranges (like set_intersection, set_union, etc):
set_intersection
I don't understand why you have to use a map to do intersection. Like people have said, you could put the sets in std::set's, and then use std::set_intersection().
Or you can put them into hash_set's. But then you would have to implement intersection manually: technically you only need to put one of the sets into a hash_set, and then loop through the other one, and test if each element is contained in the hash_set.
Intersection with maps is slow; try a hash_map. (However, this is not provided in all STL implementations.)
Alternatively, sort both maps and do it in a merge-sort-like way.
What is your intersection algorithm? Maybe there are some improvements to be made?
Here is an alternate method
I do not know it to be faster or slower, but it could be something to try. Before doing so, I also recommend using a profiler to ensure you really are working on the hotspot. Change the sets of numbers you are intersecting to use std::set<int> instead. Then iterate through the smallest one looking at each value you find. For each value in the smallest set, use the find method to see if the number is present in each of the other sets (for performance, search from smallest to largest).
This is optimised in the case that the number is not found in all of the sets, so if the intersection is relatively small, it may be fast.
Then, store the intersection in std::vector<int> instead - insertion using push_back is also very fast.
Here is another alternate method
Change the sets of numbers to std::vector<int> and use std::sort to sort from smallest to largest. Then use std::binary_search to find the values, using roughly the same method as above. This may be faster than searching a std::set since the array is more tightly packed in memory. Actually, never mind that, you can then just iterate through the values in lock-step, looking at the ones with the same value. Increment only the iterators which are less than the minimum value you saw at the previous step (if the values were different).
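A minimal sketch of that lock-step walk over two sorted vectors (counting rather than collecting, for brevity; the function name is mine):
#include <cstddef>
#include <vector>

// Both inputs must be sorted in ascending order.
int count_intersection(const std::vector<int>& a, const std::vector<int>& b)
{
    int count = 0;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] < b[j]) {
            ++i;               // advance the side with the smaller value
        } else if (b[j] < a[i]) {
            ++j;
        } else {
            ++count; ++i; ++j; // common value found
        }
    }
    return count;
}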
Might be your algorithm. As I understand it, you are spinning over each set (which I'm hoping is a standard set), and throwing them into yet another map. This is doing a lot of work you don't need to do, since the keys of a standard set are in sorted order already. Instead, take a "merge-sort"-like approach. Spin over each iterator, dereferencing to find the minimum. Count the number that have that minimum, and increment those. If the count was N, add the value to the intersection. Repeat until the first map hits its end. (If you compare the sizes before starting, you won't have to check every map's end each time.)
Responding to the update: There do exist facilities to speed up memory allocation by pre-reserving space, like boost::pool_allocator. Something like:
std::map<int, int, std::less<int>, boost::pool_allocator< std::pair<int const, int> > > m;
But honestly, malloc is pretty good at what it does; I'd profile before doing anything too extreme.
Look at your algorithms, then choose the proper data type. If you're going to have set-like behaviour, and want to do intersections and the like, std::set is the container to use.
Since its elements are stored in sorted order, insertion may cost you O(log N), but intersection with another (sorted!) std::set can be done in linear time.
I figured something out: if I attach the debugger to either RELEASE or DEBUG builds (e.g. hit F5 in the IDE), then I get horrible times.