How to group a vector pair by the second value efficiently? - C++

I am trying to group a vector of pairs, vector<pair<int,int>>, by the second value of each pair. For example, given v0 : (0,1),(1,1),(3,2),(4,2),(5,1), I want to produce two outputs. The first is the set of unique second elements, which is
vector<int> v2={1,2};
The second is groups of the first elements, which could be
vector<vector<int>>v1;
v1[0]={0,1,5};
v1[1]={3,4};
How can I achieve this efficiently? Do I need to sort v0 by the second element before grouping? Would std::map be faster? Beyond the method itself, I am also concerned about speed, because my v0 is a very long, unsorted list of triangle mesh vertex indices. Any suggestion is appreciated.
Update: I found one solution similar to this link. It does not require sorting, but I have no idea about its speed.
map<int, vector<int>> vpmap;
for (auto it = v0.begin(); it != v0.end(); ++it) {
    vpmap[it->second].push_back(it->first);
}
Here, the keys of vpmap correspond to v2, and the mapped values correspond to v1.

What you have is a reasonably performant way of getting the exact data structures you're looking for. Be sure to pre-allocate the vectors, since you know their sizes, and use move iterators to avoid unnecessary copying:
std::vector<int> v2;
std::vector<std::vector<int>> v1;
v2.reserve(vpmap.size());
std::transform(vpmap.begin(), vpmap.end(), std::back_inserter(v2),
               [](const auto &p) { return p.first; });
v1.reserve(vpmap.size());
std::transform(std::make_move_iterator(vpmap.begin()),
               std::make_move_iterator(vpmap.end()),
               std::back_inserter(v1),
               [](auto &&p) { return std::move(p.second); });
If you can loosen your constraints, do think about big-picture optimizations like "do I need to transform all this data?"
But once you have something reasonable, stop worrying about the fastest techniques or containers, and start measuring with a profiler. Sometimes the things you worry about turn out to be non-issues, and there are non-obvious costs that stem from your problem domain, your input data, and the accumulation of code.

Is there an even faster approach than swap-and-pop for erasing from std::vector?

I am asking this because the other relevant questions on SO seem either to target older versions of the C++ standard, to not mention any form of parallelization, or to focus on keeping the ordering/indexing the same as elements are removed.
I have a vector of potentially hundreds of thousands or millions of elements (which are fairly light structures, around ~20 bytes assuming they're compacted down).
Due to other restrictions it must be a std::vector; other containers (like std::forward_list) either would not work or would be even less optimal for other uses.
I recently switched from the simple it = myVec.erase(it) approach to swap-and-pop, using something like this:
for (std::size_t i = 0; i < myVec.size();) {
    // Do calculations to determine if element must be removed
    // ...
    // Remove if needed
    if (elementMustBeRemoved) {
        myVec[i] = myVec.back();
        myVec.pop_back();
    } else {
        i++;
    }
}
This works, and was a significant improvement. It cut the runtime of the method down to ~61% of what it was previously. But I would like to improve this further.
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently? Like passing a vector of indices to erase() and have C++ do some magic under the hood to minimize movement of data?
If so, I could have threads individually gather indices that must be removed in parallel, and then combine them and pass them to erase().
Take a look at the std::remove_if algorithm. You could use it like this:
auto firstToErase = std::remove_if(myVec.begin(), myVec.end(),
    [](const auto& x) {
        // Do calculations to determine if element must be removed
        // ...
        return elementMustBeRemoved;
    });
myVec.erase(firstToErase, myVec.end());
cppreference says that the following code is a possible implementation of remove_if:
template<class ForwardIt, class UnaryPredicate>
ForwardIt remove_if(ForwardIt first, ForwardIt last, UnaryPredicate p)
{
    first = std::find_if(first, last, p);
    if (first != last)
        for (ForwardIt i = first; ++i != last; )
            if (!p(*i))
                *first++ = std::move(*i);
    return first;
}
Instead of swapping with the last element, it moves once through the container, shifting the kept elements down and accumulating the to-be-erased range, until that range sits at the very end of the vector. This access pattern is more cache-friendly, and you might notice a performance improvement on a very big vector.
If you want to experiment with a parallel version, there is an overload (4) that allows you to specify an execution policy.
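A minimal sketch of what that could look like, assuming a standard library that ships the parallel algorithms in <execution>, with a hypothetical mustBeRemoved predicate standing in for your actual check:
#include <algorithm>
#include <execution>

auto firstToErase = std::remove_if(std::execution::par,
                                   myVec.begin(), myVec.end(),
                                   [](const auto& x) { return mustBeRemoved(x); });
myVec.erase(firstToErase, myVec.end());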
Or, since C++20, you can type slightly less and use std::erase_if. However, in that case you lose the option to choose an execution policy.
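For example, with the same hypothetical predicate:
#include <vector>  // since C++20, std::erase_if for vectors is declared here

std::erase_if(myVec, [](const auto& x) { return mustBeRemoved(x); });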
Is there an even faster approach than swap-and-pop for erasing from std::vector?
Ever since C++11, the optimal way to remove a single element from a vector without preserving order has been move-and-pop rather than swap-and-pop.
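In your loop, that would mean replacing the assignment with a move, which saves an element copy for non-trivially-copyable types:
myVec[i] = std::move(myVec.back());
myVec.pop_back();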
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently?
The remove-erase idiom (std::erase / std::erase_if since C++20) is the most efficient approach the standard provides. std::remove_if does preserve the order of the kept elements; if you don't care about that, a more efficient algorithm is possible, but the standard library does not ship an unstable remove out of the box. The algorithm goes as follows (a sketch follows below):
Find the first element to be removed (a).
Find the last element that should not be removed (b).
Move b into a's position.
Repeat between a and b until the iterators meet.
There is a proposal, P0048, to add such an algorithm to the standard library, and there is a demo implementation at https://github.com/WG21-SG14/SG14/blob/6c5edd5c34e1adf42e69b25ddc57c17d99224bb4/SG14/algorithm_ext.h#L84
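Following the steps above, a minimal sketch of such an unstable remove (an illustration rather than the SG14 code; it assumes bidirectional iterators):
#include <utility>  // std::move

template <class BidirIt, class UnaryPredicate>
BidirIt unstable_remove_if(BidirIt first, BidirIt last, UnaryPredicate p)
{
    while (true) {
        // (1) Find the first element to be removed.
        while (first != last && !p(*first)) ++first;
        if (first == last) break;
        // (2) Find the last element to be kept.
        do { --last; } while (first != last && p(*last));
        if (first == last) break;
        // (3) Move the kept element into the hole; nothing in between is shifted.
        *first++ = std::move(*last);
    }
    return first;  // new logical end; erase [first, end()) afterwards
}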

unordered set intersection in C++

Here is my code; any ideas to make it faster? My implementation is brute force: for each element in a, check whether it is also in b, and if so, put it in the result set c. Any smarter ideas are appreciated.
#include <cstdio>
#include <unordered_set>

int main() {
    std::unordered_set<int> a = {1,2,3,4,5};
    std::unordered_set<int> b = {3,4,5,6,7};
    std::unordered_set<int> c;
    for (auto i = a.begin(); i != a.end(); i++) {
        if (b.find(*i) != b.end()) c.insert(*i);
    }
    for (int v : c) {
        std::printf("%d \n", v);
    }
}
Asymptotically, your algorithm is as good as it can get.
In practice, I'd add a check to loop over the smaller of the two sets and do lookups in the larger one. Assuming reasonably evenly distributed hashes, a lookup in a std::unordered_set takes constant time. This way, you'll be performing fewer such lookups.
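A sketch of that tweak on your code:
// Iterate over the smaller set, probe the larger one.
const auto& smaller = a.size() <= b.size() ? a : b;
const auto& larger  = a.size() <= b.size() ? b : a;
std::unordered_set<int> c;
for (int v : smaller) {
    if (larger.count(v)) c.insert(v);
}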
You can do it with std::copy_if(), capturing b by reference so the whole set isn't copied into the lambda:
std::copy_if(a.begin(), a.end(), std::inserter(c, c.begin()),
             [&b](int element) { return b.count(element) > 0; });
Your algorithm is as good as it gets for an unordered set. However, if you use a std::set (which uses a binary tree as storage) or, even better, a sorted std::vector, you can do better. The algorithm should be something like the following (see the sketch after the next paragraph):
Get iterators to a.begin() and b.begin().
If the iterators point to equal elements, add the element to the intersection and increment both iterators.
Otherwise, increment the iterator pointing to the smaller value.
Go to 2.
Both should run in O(n) time, but using a sorted container saves you from calculating hashes and from any performance degradation caused by hash collisions.
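For illustration, a sketch of that merge-style walk over two sorted vectors; it is essentially what std::set_intersection does, so in practice you can call that algorithm directly:
#include <vector>

std::vector<int> intersect_sorted(const std::vector<int>& a,
                                  const std::vector<int>& b)
{
    std::vector<int> out;
    auto ia = a.begin(), ib = b.begin();
    while (ia != a.end() && ib != b.end()) {
        if (*ia == *ib) {        // equal: part of the intersection
            out.push_back(*ia);
            ++ia; ++ib;
        } else if (*ia < *ib) {  // advance the iterator at the smaller value
            ++ia;
        } else {
            ++ib;
        }
    }
    return out;
}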
Thanks Angew, but why is your method faster? Could you elaborate a bit more?
Well, let me provide some additional info...
It should be pretty clear that, whichever data structures you use, you will have to iterate over all elements in at least one of them, so you cannot do better than O(n), n being the number of elements in the structure you iterate over. The crucial question is then how fast you can look up the elements in the other structure. With a hash set, which std::unordered_set actually is, this is O(1), at least if the number of collisions is small enough ("reasonably evenly distributed hashes"); the degenerate case would be all values having the same hash...
So far, that gives O(n) * O(1) = O(n). But you still have a choice: O(n) or O(m), where m is the number of elements in the other set. In complexity terms these are the same, the algorithm is linear either way; in practice, though, you can spare some hash calculations and lookups if you iterate over the set with fewer elements...

How can I create a histogram of vector<vector<long>>?

I have allocated a vector<vector<long>>. What is the right way to create a histogram, or to use std::find across all the inner vectors, without relocating the data?
Thanks
By histogram I understand a map from value to number of occurrences, which with your data means a map<long, int>; I do not understand how std::find kicks in. That said, I would go for something like this:
// assuming vect is an existing vector<vector<long>>
std::map<long, int> histogram;
for (const auto &v1 : vect)
    for (auto value : v1)
    {
        auto it = histogram.find(value);
        if (it == histogram.end())
            histogram[value] = 1;
        else
            it->second++;
    }
Based on the comments, what you need is a way to gather all the values in the vectors you are collecting at runtime and keep track of how many of each there are. Luckily there are several standard algorithms and containers that can handle this task efficiently.
std::unordered_map<long, unsigned int> histogram;
std::for_each(data.begin(), data.end(),
              [&histogram](const std::vector<long>& inner_vec)
{
    for (long val : inner_vec)
    {
        ++histogram[val];
    }
});
You did mention that the number of vectors you're dealing with could be large. This problem could quite easily lend itself to a multithreaded solution.
An initial attempt may be to give each thread its own std::unordered_map<long, unsigned int> to track the counts for a subset of the larger dataset, then merge the results back into a single histogram once all the threads finish.
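A minimal sketch of that idea, assuming data is the vector<vector<long>> from the question (parallel_histogram and num_threads are illustrative names; the merge uses C++17 structured bindings):
#include <cstddef>
#include <thread>
#include <unordered_map>
#include <vector>

std::unordered_map<long, unsigned int> parallel_histogram(
    const std::vector<std::vector<long>>& data, unsigned num_threads)
{
    // One private histogram per thread; no locking needed while counting.
    std::vector<std::unordered_map<long, unsigned int>> partial(num_threads);
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
            // Each thread handles every num_threads-th inner vector.
            for (std::size_t i = t; i < data.size(); i += num_threads)
                for (long val : data[i])
                    ++partial[t][val];
        });
    }
    for (auto& w : workers) w.join();

    // Merge the per-thread histograms.
    std::unordered_map<long, unsigned int> histogram;
    for (const auto& p : partial)
        for (const auto& [val, count] : p)
            histogram[val] += count;
    return histogram;
}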
Here's a live demo demonstrating each of the methods described above. You should compile all of these with optimizations enabled to get a true sense of how fast they can be, and measure to see whether the multithreaded solution even offers a benefit.

Swap columns in C++

I have an std matrix defined as:
std::vector<std::vector<double> > Qe(6,std::vector<double>(6));
and a vector v that is:
v{0, 1, 3, 2, 4, 5};
I would like to swap columns 2 and 3 of the matrix Qe, as indicated by vector v.
In Matlab this is as easy as writing Qe=Qe(:,v);
I wonder if there is an easy way, other than a for loop, to do this in C++.
Thanks in advance.
Given that you've implemented this as a vector of vectors, you can use a simple swap:
std::swap(Qe[2], Qe[3]);
This should have constant complexity. Of course, this will depend on whether you're treating your data as column-major or row-major. If you're going to be swapping columns often, however, you'll want to arrange the data to suit that (i.e., to allow the code above to work).
As far as doing the job without a for loop when you're using row-major ordering (the usual for C++), you can technically eliminate the for loop (at least from your source code) by using a standard algorithm instead:
std::for_each(Qe.begin(), Qe.end(), [](std::vector<double> &v) {std::swap(v[2], v[3]); });
This doesn't really change what's actually happening though--it just hides the for loop itself inside a standard algorithm. In this case, I'd probably prefer a range-based for loop:
for (auto &v : Qe)
std::swap(v[2], v[3]);
...but I've never been particularly fond of std::for_each, and when C++11 added range-based for loops, they became a superior alternative for the vast majority of cases where std::for_each might previously have been a reasonable choice (IOW, I've never seen much use for std::for_each, and I see almost none now).
Depends on how you implement your matrix.
If you have a vector of columns, you can swap the column references. O(1)
If you have a vector of rows, you need to swap the elements inside each row using a for loop. O(n)
std::vector<std::vector<double>> can be used as a matrix but you also need to define for yourself whether it is a vector of columns or vector of rows.
You can create a function for this so you don't have to write a for loop each time. For example, you can write a function that receives a matrix stored as a vector of columns and a reordering vector (like v), and builds a new matrix based on the reordering vector.
// untested code and inefficient, just an example:
vector<vector<double>> ReorderColumns(const vector<vector<double>>& A,
                                      const vector<int>& order)
{
    vector<vector<double>> B;
    B.reserve(order.size());
    for (size_t i = 0; i < order.size(); i++)
    {
        B.push_back(A[order[i]]);
    }
    return B;
}
Edit: If you want to do linear algebra, there are libraries that can help you; you don't need to write everything yourself. There are math libraries for other purposes too.
If you are in a row-major scenario, the following would probably work:
// To be tested
std::vector<std::vector<double>>::iterator it;
for (it = Qe.begin(); it != Qe.end(); ++it)
{
    std::swap((*it)[2], (*it)[3]);
}
In this scenario I don't see any solution that would avoid an O(n) loop.

How to remove almost duplicates from a vector in C++

I have a std::vector of floats that should not contain duplicates, but the math that populates the vector isn't 100% precise. The vector has values that differ by a few hundredths but that should be treated as the same point. For example, here are some values from one of them:
...
X: -43.094505
X: -43.094501
X: -43.094498
...
What would be the best/most efficient way to remove duplicates from a vector like this?
First sort your vector using std::sort. Then use std::unique with a custom predicate to remove the duplicates, and erase the leftover tail:
std::sort(v.begin(), v.end());
v.erase(std::unique(v.begin(), v.end(),
                    // treats any numbers that differ by less than 0.01 as equal
                    [](double l, double r) { return std::abs(l - r) < 0.01; }),
        v.end());
Live demo
Sorting is always a good first step: use std::sort().
Then remove the not-sufficiently-unique elements with std::unique().
As a last step, call resize() to drop the leftover tail, and maybe also shrink_to_fit().
If you want to preserve the original order, do the previous three steps on a copy (omit the shrinking, though).
Then use std::remove_if with a lambda that binary-searches the copy for each element and retains the element only if it is found, removing it from the copy when found so that only the first occurrence survives. A sketch of this follows below.
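Here is what that order-preserving variant could look like, as a sketch assuming the same 0.01 tolerance as above (dedupe_keep_order is an illustrative name):
#include <algorithm>
#include <cmath>
#include <vector>

void dedupe_keep_order(std::vector<double>& v)
{
    // Steps 1-3 on a copy: sort and collapse near-duplicates.
    std::vector<double> kept = v;
    std::sort(kept.begin(), kept.end());
    kept.erase(std::unique(kept.begin(), kept.end(),
                           [](double l, double r) { return std::abs(l - r) < 0.01; }),
               kept.end());

    // Retain, in original order, only the first occurrence of each kept value.
    v.erase(std::remove_if(v.begin(), v.end(),
                [&kept](double x) {
                    auto it = std::lower_bound(kept.begin(), kept.end(), x);
                    bool found = (it != kept.end() && *it == x);
                    if (found) kept.erase(it);  // don't retain later duplicates
                    return !found;
                }),
            v.end());
}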
I say std::sort() it, then go through it one by one and remove the values within a certain margin.
You can use a separate write iterator into the same vector and a single resize operation at the end, instead of calling erase() for each removed element or keeping a second destination copy, for better performance and smaller memory usage.
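A sketch of that in-place compaction (margin is a hypothetical tolerance):
const double margin = 0.01;  // assumed tolerance
std::sort(v.begin(), v.end());
auto write = v.begin();
for (auto read = v.begin(); read != v.end(); ++read) {
    // Keep the first value, and any value far enough from the last kept one.
    if (write == v.begin() || *read - *(write - 1) >= margin)
        *write++ = *read;
}
v.resize(write - v.begin());  // one shrink at the end, no per-element erase()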
If your vector must not contain duplicates, it may be more appropriate to use a std::set. You can then use a custom comparison object that treats small differences as inconsequential.
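For instance, a sketch of such a comparison object (with the caveat, discussed in the last answer below, that this "approximately equal" relation is not transitive, so it is only safe when the clusters of near-duplicates are well separated):
#include <set>
#include <vector>

// Values within eps of each other compare as equivalent.
struct ApproxLess {
    double eps = 0.01;  // hypothetical tolerance
    bool operator()(double l, double r) const { return l < r - eps; }
};

std::set<double, ApproxLess> s(v.begin(), v.end());
std::vector<double> unique_vals(s.begin(), s.end());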
Hi, you could compare like this:
bool isAlmostEquals(const double &f1, const double &f2)
{
    double allowedDif = xxxx;
    return (std::abs(f1 - f2) <= allowedDif);
}
But it depends on your comparison range, and double precision is not on your side.
If your vector is sorted, you could use std::unique with this function as the predicate.
I would do the following:
Create a set<double>.
Go through your vector in a loop or using a functor.
Round each element and insert it into the set.
Then swap your vector with an empty vector.
Copy all elements from the set into the empty vector.
The complexity of this approach will be n * log(n), but it's simpler and can be done in a few lines of code. The memory consumption will double compared to just storing the vector, and a set consumes slightly more memory per element than a vector; however, you destroy it after use.
std::vector<double> v;
v.push_back(-43.094505);
v.push_back(-43.094501);
v.push_back(-43.094498);
v.push_back(-45.093435);
std::set<double> s;
std::vector<double>::const_iterator it = v.begin();
for (; it != v.end(); ++it)
    s.insert(std::floor(*it)); // the rounding step; floor is coarse, scale first for finer buckets
v.clear();                     // start fresh, then copy the de-duplicated values back
v.resize(s.size());
std::copy(s.begin(), s.end(), v.begin());
The problem with most answers so far is that you have an unusual "equality". If A and B are similar but not identical, you want to treat them as equal. Basically, A and A+epsilon still compare as equal, but A+2*epsilon does not (for some unspecified epsilon). Or, depending on your algorithm, A*(1+epsilon) does and A*(1+2*epsilon) does not.
That does mean that A+epsilon compares equal to A+2*epsilon. Thus A = B and B = C does not imply A = C. This breaks common assumptions in <algorithm>.
You can still sort the values; that is a sane thing to do. But you have to consider what to do with a long run of similar values in the result. If the run is long enough, the difference between the first and last value can still be large. There's no simple answer.