I have a vector<vector<int> > A of size 44,000. I need to intersect A with another vector, vector<int> B, of size 400,000. The inner vectors of A vary in size, with a maximum of 9,000 elements. To do this I am using the following code:
for (int i = 0; i < 44000; i++) {
    vector<int> intersect;
    set_intersection(A[i].begin(), A[i].end(), B.begin(), B.end(),
                     std::back_inserter(intersect));
}
Is there some way to make this code more efficient? All the elements in A are sorted, e.g. ((0,1,4,5),(7,94,830,1000)), etc. That is, all elements of A[i]'s vector < all elements of A[j]'s vector whenever i < j.
EDIT: One solution I thought of is to merge all the A[i] together into another vector mergedB using:
vector<int> mergedB;
for (int i = 0; i < 44000; i++)
    mergedB.insert(mergedB.end(), A[i].begin(), A[i].end());

vector<int> intersect;
set_intersection(mergedB.begin(), mergedB.end(), B.begin(), B.end(),
                 std::back_inserter(intersect));
However, I do not understand why I am getting almost the same performance with both versions. Can someone please help me understand this?
As it happens, set_intersection is easy to write.
A fancy way would be to create a concatenating iterator and walk over each element of the lhs vector-of-vectors, but it is easier to write set_intersection manually.
template<class MetaIt, class FilterIt, class Sink>
void meta_intersect(MetaIt mb, MetaIt me, FilterIt b, FilterIt e, Sink sink) {
    using std::begin; using std::end;
    if (b == e) return;
    while (mb != me) {
        auto b2 = begin(*mb);
        auto e2 = end(*mb);
        if (b2 == e2) {
            ++mb;
            continue;
        }
        do {
            if (*b2 < *b) {
                ++b2;
                continue;
            }
            if (*b < *b2) {
                ++b;
                if (b == e) return;
                continue;
            }
            *sink = *b2;
            ++sink; ++b; ++b2;
            if (b == e) return;
        } while (b2 != e2);
        ++mb;
    }
}
This does not copy elements, other than into the output vector. It assumes MetaIt is an iterator over containers, FilterIt is an iterator over a compatible container, and Sink is an output iterator.
I attempted to remove all redundant comparisons while keeping the code somewhat readable. There is one redundant check -- we test b != e and then b == e in the single case where we run out of rhs contents. As this should only happen once, removing it isn't worth the cost to clarity.
You could possibly make the above more efficient with vectorization on modern hardware. I'm not an expert at that. Mixing vectorization with the meta-iteration is tricky.
Since your vectors are sorted, the simplest and fastest algorithm will be to:

1. Set the current element of both vectors to the first value.
2. Compare both current elements. If they are equal, you have an intersection, so increment both vectors' positions.
3. If they are not equal, increment the vector with the smaller current element.
4. Go to 2.
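A minimal sketch of that loop (essentially what std::set_intersection does internally for two sorted ranges; the names are mine):

#include <cstddef>
#include <vector>

// Classic two-pointer intersection of two sorted vectors.
std::vector<int> intersect_sorted(const std::vector<int>& a,
                                  const std::vector<int>& b) {
    std::vector<int> out;
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] == b[j]) {        // match: keep it and advance both
            out.push_back(a[i]);
            ++i; ++j;
        } else if (a[i] < b[j]) {  // otherwise advance the smaller side
            ++i;
        } else {
            ++j;
        }
    }
    return out;
}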
I am writing a function to take the intersection of two sorted vector<size_t>s named a and b. The function iterates through both vectors removing anything from a that is not also in b so that whatever remains in a is the intersection of the two. Code here:
void intersect(vector<size_t> &a, vector<size_t> &b) {
    vector<size_t>::iterator aItr = a.begin();
    vector<size_t>::iterator bItr = b.begin();
    vector<size_t>::iterator aEnd = a.end();
    vector<size_t>::iterator bEnd = b.end();
    while (aItr != aEnd) {
        while (*bItr < *aItr) {
            bItr++;
            if (bItr == bEnd) {
                a.erase(aItr, aEnd);
                return;
            }
        }
        if (*aItr == *bItr) aItr++;
        else aItr = a.erase(aItr, aItr + 1);
    }
}
I am getting a very strange bug. When I step through with the debugger, once it passes the line while(*bItr < *aItr), b seems to disappear -- the debugger seems not to know that b even exists! When b comes back into existence after execution returns to the top of the loop, it has taken on the values of a!
This is the kind of behavior I would expect from a dynamic memory error, but as you can see I am not managing any dynamic memory here. I am super confused and could really use some help.
Thanks in advance!
Well, perhaps you should first address a major issue with your code: iterator invalidation.
See: Iterator invalidation rules here on StackOverflow.
When you erase an element of a vector, iterators into that vector at the point of deletion and beyond are no longer guaranteed to be valid. Your code, though, assumes such validity for aEnd (thanks @SidS).
I would guess either this is the reason for what you're seeing, or maybe it's your compiler optimization flags, which can change the execution flow and cut short the lifetimes of variables the optimizer deems unnecessary, etc.
Plus, as @KT. notes, your erases can be really expensive, making your algorithm potentially quadratic-time in the length of a.
You are making the assumption that b contains at least one element. To address that, you can add this prior to your first loop:
if (bItr == bEnd)
{
    a.clear();
    return;
}
Also, since you're erasing elements from a, aEnd will become invalid. Replace every use of aEnd with a.end().
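Putting both fixes together, a corrected version of the original erase-based approach might look like this (still potentially quadratic in the worst case, as noted above):

void intersect(vector<size_t> &a, const vector<size_t> &b) {
    vector<size_t>::const_iterator bItr = b.begin();
    vector<size_t>::const_iterator bEnd = b.end();
    if (bItr == bEnd) { a.clear(); return; } // b is empty: intersection is empty
    vector<size_t>::iterator aItr = a.begin();
    while (aItr != a.end()) {                // a.end() is re-read on every pass
        while (*bItr < *aItr) {
            ++bItr;
            if (bItr == bEnd) {
                a.erase(aItr, a.end());
                return;
            }
        }
        if (*aItr == *bItr) ++aItr;
        else aItr = a.erase(aItr);           // erase returns the next valid iterator
    }
}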
std::set_intersection could do all of this for you:
void intersect(vector<size_t> &a, const vector<size_t> &b)
{
    auto it = set_intersection(a.begin(), a.end(), b.begin(), b.end(), a.begin());
    a.erase(it, a.end());
}
I have implemented a merge sort in C++ using vectors as function arguments instead of indices (start, end). However, I would love to know if there is any trade-off in doing this, in terms of speed and space complexity.
The code:
void mergeSort(std::vector<int> &array) {
    if (array.size() <= 1) return; // <= 1 also guards against an empty vector
    const unsigned int len = array.size();
    const unsigned int lo = len / 2; // floor(len / 2)
    std::vector<int> L(&array[0], &array[lo]);
    std::vector<int> R(&array[lo], &array[0] + len);
    mergeSort(L);
    mergeSort(R);
    merge(array, L, R);
}
Creating new vectors on every call to merge sort might not be the way to go, but this is how this merge sort works. Also, how fast/slow is this:
std::vector<int> L(&array[0], &array[lo]);
The merge function then looks like:
void merge(
    std::vector<int> &array,
    std::vector<int> &L,
    std::vector<int> &R
) {
    std::vector<int>::iterator a = array.begin();
    std::vector<int>::iterator l = L.begin();
    std::vector<int>::iterator r = R.begin();
    while (l != L.end() && r != R.end()) {
        if (*l <= *r) {
            *a = *l;
            l++;
        } else {
            *a = *r;
            r++;
        }
        a++;
    }
    while (l != L.end()) {
        *a = *l;
        a++;
        l++;
    }
    while (r != R.end()) {
        *a = *r;
        a++;
        r++;
    }
}
Well, there is no need to allocate new space on each call to merge sort.
std::vector<int> L(&array[0], &array[lo]); will actually allocate space to accommodate lo elements and perform lo copies as well.
You are never going to use more than O(n) additional space for storing values. So why not allocate a buffer large enough to accommodate a copy of the whole vector up front, and make each recursive call operate on a specific portion of the data? That way you don't have to create new vectors on each call.
Plus, I would also encourage you to make the merge sort work on iterators instead of on vector<int> only. An interface like the following should be enough:
template <typename Iterator, typename Compare>
void mergesort(Iterator s, Iterator e, Compare cmp);
On GitHub you can find a version of merge sort I implemented a while ago. It should be enough, I guess.
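Not that implementation, but a minimal sketch of the suggested approach -- random-access iterators, and one scratch buffer allocated up front that every merge reuses (mergesort_impl and the buffer parameter are my naming, not part of the original suggestion):

#include <algorithm>
#include <iterator>
#include <vector>

template <typename Iterator, typename Compare>
void mergesort_impl(Iterator s, Iterator e, Compare cmp,
    std::vector<typename std::iterator_traits<Iterator>::value_type>& buf)
{
    const auto len = std::distance(s, e);
    if (len <= 1) return;
    Iterator mid = s + len / 2;
    mergesort_impl(s, mid, cmp, buf);
    mergesort_impl(mid, e, cmp, buf);
    buf.assign(s, e);                    // copy the range once per merge
    auto lo = buf.begin(), lm = buf.begin() + len / 2, hi = buf.begin() + len;
    std::merge(lo, lm, lm, hi, s, cmp);  // merge the halves back into [s, e)
}

template <typename Iterator, typename Compare>
void mergesort(Iterator s, Iterator e, Compare cmp)
{
    std::vector<typename std::iterator_traits<Iterator>::value_type> buf;
    buf.reserve(std::distance(s, e));    // the single allocation for the whole sort
    mergesort_impl(s, e, cmp, buf);
}

Call it as, e.g., mergesort(v.begin(), v.end(), std::less<int>());.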
The only additional memory you need for merge sort is an array of size n, for merging any two of the sorted vectors produced at any step of the algorithm. Obviously, your solution uses more: the first merge allocates two vectors of length n/2, then four of length n/4, and so on, giving n * log(n) in total. That's considerably more than n.
The cost of allocating a vector is generally linear in its length (assuming copying an element is O(1)), but you should remember that allocating memory on the heap is an expensive operation if you do not use a custom allocator. An allocation may issue a system call, which may run complex algorithms to find a contiguous piece of memory that satisfies the request. So there's really no point in allocating memory many times if you can stick with a single allocation.
How do I remove duplicates from an unsorted container (mainly a vector) when I do not have the possibility of defining operator<, e.g. when I can only define a fuzzy compare function?
This answer using sort does not work, since I cannot define a function for ordering the data.
template <typename T>
void removeDuplicatesComparable(T& cont) {
    for (auto iter = cont.begin(); iter != cont.end(); ++iter) {
        cont.erase(std::remove(boost::next(iter), cont.end(), *iter), cont.end());
    }
}
This is O(n²) and should be quite localized concerning cache hits.
Is there a faster or at least neater solution?
Edit: On why I cannot use sets: I do geometric comparisons. An example could be this, but I have other entities different from polygons as well.
bool match(SegPoly const& left, SegPoly const& right, double epsilon) {
    double const cLengthCompare = 0.1; // just an example
    if (!isZero(left.getLength() - right.getLength(), cLengthCompare))
        return false;
    double const interArea = areaOfPolygon(left.intersected(right)); // a geometric intersection
    if (!isZero(interArea - right.getArea(), epsilon))
        return false;
    else
        return true;
}
So for such comparisons I would not know how to formulate sorting or a neat hash function.
First, don't remove elements one at a time.
Next, use a hash table (or similar structure) to detect duplicates.
If you don't need to preserve order, then copy all elements into a hashset (this destroys duplicates), then recreate the vector using the values left in the hashset.
If you need to preserve order, then:
1. Set read and write iterators to the beginning of the vector.
2. Move the read iterator forward, checking elements against a hashset or octree or something that allows finding nearby elements quickly.
3. For each element that collides with one in the hashset/octree, advance the read iterator only.
4. For elements that do not collide, move them from the read iterator to the write iterator, copy them into the hashset/octree, then advance both.
5. When the read iterator reaches the end, call erase to truncate the vector at the write iterator position. (This pass is sketched in code below.)
The key advantage of the octree is that while it doesn't let you immediately determine whether there is something close enough to be a "duplicate", it allows you to test against only near neighbors, excluding most of your dataset. So your algorithm might be O(N lg N) or even O(N lg lg N), depending on the spatial distribution.
Again, if you don't care about the ordering, you can simply move survivors into the hashset/octree and at the end move them back into the vector (compactly).
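A minimal sketch of the order-preserving pass, with a plain linear scan of the survivors standing in for the octree query (isDup is whatever fuzzy predicate you have, e.g. the match function above):

#include <cstddef>
#include <vector>

// Order-preserving compaction; Pred is the fuzzy "is duplicate" test.
template <class T, class Pred>
void removeFuzzyDuplicates(std::vector<T>& v, Pred isDup) {
    std::size_t write = 0;                        // survivors live in [0, write)
    for (std::size_t read = 0; read < v.size(); ++read) {
        bool collides = false;
        for (std::size_t k = 0; k < write; ++k) { // an octree would prune this scan
            if (isDup(v[k], v[read])) { collides = true; break; }
        }
        if (!collides)
            v[write++] = v[read];                 // keep: copy down, advance both
        // otherwise advance the read index only
    }
    v.erase(v.begin() + write, v.end());          // truncate at the write position
}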
If you don't want to rewrite your code to prevent duplicates from being placed in the vector to begin with, you can do something like this:
std::vector<Type> myVector;
// fill in the vector's data
std::unordered_set<Type> mySet(myVector.begin(), myVector.end());
myVector.assign(mySet.begin(), mySet.end());
This will be O(2 * n) = O(n).
std::set (or std::unordered_set - which uses a hash instead of a comparison) doesn't allow for duplicates, so it will eliminate them as the set is initialized. Then you re-assign the vector with the non-duplicated data.
Since you are insisting that you cannot create a hash, another alternative is to create a temporary vector:
std::vector<Type> vec1;
// fill vec1 with your data
std::vector<Type> vec2;
vec2.reserve(vec1.size()); // vec1.size() is the maximum possible size for vec2
std::for_each(vec1.begin(), vec1.end(), [&](const Type& t)
{
    bool is_unique = true;
    for (std::vector<Type>::iterator it = vec2.begin(); it != vec2.end(); ++it)
    {
        if (YourCustomEqualityFunction(*it, t))
        {
            is_unique = false;
            break;
        }
    }
    if (is_unique)
    {
        vec2.push_back(t);
    }
});
vec1.swap(vec2);
If copies are a concern, switch to a vector of pointers, and you can decrease the memory reallocations:
std::vector<std::shared_ptr<Type>> vec1;
// fill vec1 with your data
std::vector<std::shared_ptr<Type>> vec2;
vec2.reserve(vec1.size()); // vec1.size() is the maximum possible size for vec2
std::for_each(vec1.begin(), vec1.end(), [&](const std::shared_ptr<Type>& t)
{
    bool is_unique = true;
    for (std::vector<std::shared_ptr<Type>>::iterator it = vec2.begin(); it != vec2.end(); ++it)
    {
        if (YourCustomEqualityFunction(**it, *t))
        {
            is_unique = false;
            break;
        }
    }
    if (is_unique)
    {
        vec2.push_back(t);
    }
});
vec1.swap(vec2);
I need advice on micro-optimization in C++ for a vector comparison function. It compares two vectors for equality, where the order of elements does not matter.
template <class T>
static bool compareVectors(const vector<T> &a, const vector<T> &b)
{
    int n = a.size();
    std::vector<bool> free(n, true);
    for (int i = 0; i < n; i++) {
        bool matchFound = false;
        for (int j = 0; j < n; j++) {
            if (free[j] && a[i] == b[j]) {
                matchFound = true;
                free[j] = false;
                break;
            }
        }
        if (!matchFound) return false;
    }
    return true;
}
This function is used heavily and I am thinking of possible ways to optimize it. Can you please give me some suggestions? By the way, I use C++11.
Thanks
I just realized that this code only does a kind of "set equivalency" check (and now I see that you actually did say that -- what a lousy reader I am!). This can be achieved much more simply:
template <class T>
static bool compareVectors(vector<T> a, vector<T> b)
{
    std::sort(a.begin(), a.end());
    std::sort(b.begin(), b.end());
    return (a == b);
}
You'll need to include the header <algorithm>.
If your vectors are always of the same size, you may want to add an assertion at the beginning of the method:

assert(a.size() == b.size());

This will be handy in debugging your program if you ever perform this operation on unequal lengths by mistake.
Otherwise, the vectors can't be equal if they have unequal lengths, so just add

if (a.size() != b.size())
{
    return false;
}

before the sort instructions. This will save you lots of time.
Technically the complexity of this is O(n*log(n)), because it is dominated by the sorting, which (usually) is of that complexity. This is better than your O(n^2) approach, but might be worse due to the required copies. The copies are irrelevant if you are allowed to sort the original vectors in place.
If you want to stick with your approach, but tweak it, here are my thoughts on this:
You can use std::find for this:
template <class T>
static bool compareVectors(const vector<T> &a, const vector<T> &b)
{
    const size_t n = a.size(); // make it const and unsigned!
    std::vector<bool> free(n, true);
    for (size_t i = 0; i < n; ++i)
    {
        bool matchFound = false;
        auto start = b.cbegin();
        while (true)
        {
            const auto position = std::find(start, b.cend(), a[i]);
            if (position == b.cend())
            {
                break; // nothing found
            }
            const auto index = position - b.cbegin();
            if (free[index])
            {
                // free pair found
                free[index] = false;
                matchFound = true;
                break;
            }
            else
            {
                start = position + 1; // search in the rest
            }
        }
        if (!matchFound)
        {
            return false;
        }
    }
    return true;
}
Another possibility is replacing the structure that stores the free positions. You may try a std::bitset, or just store the used indices in a vector and check whether a match is in that index vector. If the outcome of this function is very often the same (so either mostly true or mostly false), you can optimize your data structures to reflect that. E.g. I'd use the list of used indices if the outcome is usually false, since then only a handful of indices need to be stored.
This method has the same complexity as your approach. Using std::find to search for things is sometimes better than a manual search. (E.g. if the data is sorted and the compiler knows about it, this can be a binary search).
You can probabilistically compare two unsorted vectors (u, v) in O(n):
Calculate:
U= xor(h(u[0]), h(u[1]), ..., h(u[n-1]))
V= xor(h(v[0]), h(v[1]), ..., h(v[n-1]))
If U==V then the vectors are probably equal.
h(x) is any non-cryptographic hash function - such as MurmurHash. (Cryptographic functions would work as well but would usually be slower).
(This would work even without hashing, but it would be much less robust when the values have a relatively small range).
A 128-bit hash function would be good enough for many practical applications.
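A minimal sketch of the idea, using std::hash as a stand-in for MurmurHash (note that std::hash<int> is often the identity, so this inherits the small-range weakness mentioned above):

#include <cstddef>
#include <functional>
#include <vector>

// XOR-combine the element hashes; order-independent by construction.
std::size_t xorFingerprint(const std::vector<int>& v) {
    std::hash<int> h;       // stand-in for a stronger hash such as MurmurHash
    std::size_t acc = 0;
    for (int x : v) acc ^= h(x);
    return acc;
}

// Probably equal if the fingerprints match. False positives are possible,
// e.g. pairs of identical elements cancel out under XOR.
bool probablyEqual(const std::vector<int>& u, const std::vector<int>& v) {
    return u.size() == v.size() && xorFingerprint(u) == xorFingerprint(v);
}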
I notice that most proposed solutions involve sorting both of the input vectors. I think sorting computes more than is strictly necessary for evaluating the equality of the two vectors (and if the input vectors are constant, a copy needs to be made).
Another way would be to build an associative container to count the elements in each vector... It's also possible to do the reduction of the two vectors in parallel. In the case of very large vectors that could give a nice speed-up.
template <typename T>
bool compareVector(const std::vector<T> & vec1, const std::vector<T> & vec2) {
    if (vec1.size() != vec2.size())
        return false;
    // Here we assume that T is hashable...
    auto count_set = std::unordered_map<T, int>();
    // We count the elements in each vector...
    for (unsigned int count = 0; count < vec1.size(); ++count)
    {
        count_set[vec1[count]]++;
        count_set[vec2[count]]--;
    }
    // If everything balances out we should have zero everywhere
    return std::all_of(count_set.begin(), count_set.end(),
                       [](const std::pair<const T, int> & p) { return p.second == 0; });
}
This way, depending on the performance of your hashing function, we might get linear complexity in the length of both vectors (vs. n*log(n) with the sorting).
NB: the code might have some bugs -- I didn't have time to check it...
Benchmarking this way of comparing two vectors against the sort-based comparison, I get the following on Ubuntu 13.10, VMware, Core i7 gen 3:
Comparing 200 vectors of 500 elements by counting takes 0.184113 seconds
Comparing 200 vectors of 500 elements by sorting takes 0.276409 seconds
Comparing 200 vectors of 1000 elements by counting takes 0.359848 seconds
Comparing 200 vectors of 1000 elements by sorting takes 0.559436 seconds
Comparing 200 vectors of 5000 elements by counting takes 1.78584 seconds
Comparing 200 vectors of 5000 elements by sorting takes 2.97983 seconds
As others suggested, sorting your vectors beforehand will improve performance.
As an additional optimization you can make heaps out of the vectors to compare, which is O(n), instead of sorting them, which is O(n*log(n)).
Afterwards you can pop elements from both heaps (complexity O(log(n)) per pop) until you get a mismatch.
This has the advantage that you only heapify, instead of sort, your vectors if they are not equal.
Below is a code sample. To know what is really fastest, you will have to measure with some sample data for your use case.
#include <algorithm>
#include <vector>

typedef std::vector<int> myvector;

bool compare(myvector& l, myvector& r)
{
    bool possibly_equal = l.size() == r.size();
    if (possibly_equal)
    {
        std::make_heap(l.begin(), l.end());
        std::make_heap(r.begin(), r.end());
        for (int i = l.size(); i != 0; --i)
        {
            possibly_equal = l.front() == r.front();
            if (!possibly_equal)
                break;
            std::pop_heap(l.begin(), l.begin() + i);
            std::pop_heap(r.begin(), r.begin() + i);
        }
    }
    return possibly_equal;
}
If you use this function a lot on the same vectors, it might be better to keep sorted copies for comparison.
In theory it might even be better to sort the vectors and compare the sorted vectors, if each one is compared just once (sorting is O(n*log(n)), comparing sorted vectors O(n), while your function is O(n^2)).
But I suppose the time spent allocating memory for the sorted vectors will dwarf any theoretical gains if you don't compare the same vectors often.
As with all optimisations, profiling is the only way to make sure; I'd try some std::sort / std::equal combo.
Like stefan says, you need to sort to get better complexity.
Then you can use the == operator (thanks for the correction in the comments -- std::equal will also work, but it is more appropriate for comparing ranges, not entire containers).
If that is not fast enough, only then bother with micro-optimization.
Also, are the vectors guaranteed to be of the same size? If not, put that check at the beginning.
Another possible solution (viable only if all elements are unique), which should somewhat improve on the solution of @stefan (although the complexity remains O(NlogN)), is this:
template <class T>
static bool compareVectors(vector<T> a, const vector<T> & b)
{
    // You should probably check this outside, as it can
    // save you the copy of a
    if (a.size() != b.size()) return false;
    std::sort(a.begin(), a.end());
    for (const auto & v : b)
        if (!std::binary_search(a.begin(), a.end(), v)) return false;
    return true;
}
This should be faster, since it performs the search directly as a single O(NlogN) pass, instead of sorting b (O(NlogN)) and then comparing both vectors (O(N)).
Imagine you have an std::list with a set of values in it. For demonstration's sake, we'll say it's just std::list<int>, but in my case they're actually 2D points. Anyway, I want to remove one of a pair of ints (or points) which satisfy some sort of distance criterion. My question is how to approach this as an iteration that doesn't do more than O(N^2) operations.
Example
Source is a list of ints containing:
{ 16, 2, 5, 10, 15, 1, 20 }
If I gave this a distance criterion of 1 (i.e. no item in the list should be within 1 of any other), I'd like to produce the following output:
{ 16, 2, 5, 10, 20 } if I iterated forward or
{ 20, 1, 15, 10, 5 } if I iterated backward
I feel that there must be some awesome way to do this, but I'm stuck with this double loop of iterators and trying to erase items while iterating through the list.
Make a map of "regions": basically, a std::map<coordinates/len, std::vector<point>>.
Add each point to its region and to each of the 8 neighboring regions, O(N*logN). Run the "naive" algorithm on each of these smaller lists (technically O(N^2), unless there's a maximum density, in which case it becomes O(N*density)). Finally, iterate through each point in your original list, and if it has been removed from any of the 8 mini-lists it was put in, remove it from the list: O(N).
With no limit on density, this is O(N^2) and slow. But it gets faster and faster the more spread out the points are. If the points are somewhat evenly distributed in a known boundary, you can switch to a two-dimensional array, making this significantly faster, and if there's a constant limit to the density, that technically makes it an O(N) algorithm.
That is, by the way, how you sort a list keyed on two variables: the grid/map/2D-vector thing.
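A rough sketch of the grid idea for 2D points -- a simplified variant that checks each point against the already-kept points in its cell and the 8 neighboring cells, rather than running the naive pass per region and reconciling afterwards (Point, Cell and the function names are mine):

#include <cmath>
#include <map>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Square cells of side len: two points within len of each other are
// guaranteed to fall in the same or an adjacent cell.
typedef std::pair<long, long> Cell;

Cell cellOf(const Point& p, double len) {
    return Cell(static_cast<long>(std::floor(p.x / len)),
                static_cast<long>(std::floor(p.y / len)));
}

std::vector<Point> removeClose(const std::vector<Point>& pts, double len) {
    std::map<Cell, std::vector<Point>> grid;
    std::vector<Point> kept;
    for (const Point& p : pts) {
        const Cell c = cellOf(p, len);
        bool collides = false;
        // Check the cell and its 8 neighbours for an already-kept point.
        for (long dx = -1; dx <= 1 && !collides; ++dx)
            for (long dy = -1; dy <= 1 && !collides; ++dy) {
                auto it = grid.find(Cell(c.first + dx, c.second + dy));
                if (it == grid.end()) continue;
                for (const Point& q : it->second)
                    if (std::hypot(p.x - q.x, p.y - q.y) <= len) { collides = true; break; }
            }
        if (!collides) { grid[c].push_back(p); kept.push_back(p); }
    }
    return kept;
}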
[EDIT] You mentioned you were having trouble with the "naive" method too, so here it is:
template<class iterator, class criterion>
iterator RemoveCriterion(iterator begin, iterator end, criterion criter) {
    iterator actend = end;
    for (iterator L = begin; L != actend; ++L) {
        iterator R(L);
        for (++R; R != actend;) {
            if (criter(*L, *R)) {
                iterator N(R);
                std::rotate(R, ++N, actend);
                --actend;
            } else
                ++R;
        }
    }
    return actend;
}
This should work on linked lists, vectors, and similar containers, and works in reverse. Unfortunately, it's kinda slow due to not taking into account the properties of linked lists. It's possible to make much faster versions that only work on linked lists in a specific direction. Note that the return value is important, like with the other mutating algorithms. It can only alter contents of the container, not the container itself, so you'll have to erase all elements after the return value when it finishes.
Cubbi had the best answer, though he deleted it for some reason:
Sounds like it's a sorted list, in which case std::unique will do the job of removing the second element of each pair:
#include <list>
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <iterator>

int main()
{
    std::list<int> data = {1, 2, 5, 10, 15, 16, 20};
    std::unique_copy(data.begin(), data.end(),
                     std::ostream_iterator<int>(std::cout, " "),
                     [](int n, int m){ return abs(n - m) <= 1; });
    std::cout << '\n';
}
demo: https://ideone.com/OnGxk
That trivially extends to other types -- either by changing int to something else, or by defining a template:
template<typename T> void remove_close(std::list<T> &data, int distance)
{
    data.erase(std::unique(data.begin(), data.end(),
                           [distance](T n, T m){ return abs(n - m) <= distance; }),
               data.end());
}
This will work for any type that defines operator- and abs to allow finding a distance between two objects.
As a mathematician, I am pretty sure there is no 'awesome' way of approaching this problem for an unsorted list. It seems to me a logical necessity to check the criterion for each element against all previously selected elements in order to determine whether insertion is viable or not. There may be a number of ways to optimize this, depending on the size of the list and the criterion.
Perhaps you could maintain a bitset based on the criterion. E.g. suppose abs(n-m) <= 1 is the criterion, and the first element has value 5. It is carried over into the new list, so flip bitset[5] to 1. Then, when you encounter an element of value 6, say, you need only test

!(bitset[5] | bitset[6] | bitset[7])

This would ensure no element is within magnitude 1 of anything in the resulting list. This idea may be difficult to extend to more complicated (non-discrete) criteria, however.
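A minimal sketch of that for the discrete case, assuming small non-negative integer values and the criterion abs(n-m) <= 1 (the names and the fixed range are mine):

#include <bitset>
#include <list>

// Keep an element only if no already-kept element is within distance 1.
// Assumes values lie in [0, MAXV); the bitset records kept values.
std::list<int> filterByBitset(const std::list<int>& data) {
    constexpr int MAXV = 1024;   // assumed value range
    std::bitset<MAXV> taken;
    std::list<int> out;
    for (int v : data) {
        bool tooClose = taken[v]
                     || (v > 0 && taken[v - 1])
                     || (v + 1 < MAXV && taken[v + 1]);
        if (!tooClose) {
            taken[v] = true;
            out.push_back(v);
        }
    }
    return out;
}

For the example input { 16, 2, 5, 10, 15, 1, 20 } this keeps { 16, 2, 5, 10, 20 }, matching the forward-iteration output above.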
What about:
struct IsNeighbour : public std::binary_function<int, int, bool>
{
    IsNeighbour(int dist)
      : distance(dist) {}
    bool operator()(int a, int b) const
    { return abs(a - b) <= distance; }

    int distance;
};

std::list<int>::iterator iter = lst.begin();
while (iter != lst.end())
{
    iter = std::adjacent_find(iter, lst.end(), IsNeighbour(some_distance));
    if (iter != lst.end())
        iter = lst.erase(iter);
}
This should be O(n). It searches for the first pair of neighbours (which are at most some_distance away from each other) and removes the first element of that pair. This is repeated (starting from the found item, not from the beginning, of course) until no pairs are found anymore.
EDIT: Oh sorry, you said any other element and not just the next one. In this case the above algorithm only works for a sorted list, so you should sort it first if necessary.
You can also use std::unique instead of the custom loop above:

lst.erase(std::unique(lst.begin(), lst.end(), IsNeighbour(some_distance)), lst.end());

but this removes the second item of each equal pair, not the first, so you may have to reverse the iteration direction if this matters.
For 2D points instead of ints (1D points) it is not that easy, as you cannot just sort them by their Euclidean distance. So if your real problem is to do this with 2D points, you might rephrase the question to point that out more clearly and remove the oversimplified int example.
I think this will work, as long as you don't mind making copies of the data; but if it's just a pair of integers/floats, that should be pretty low-cost. You're making n^2 comparisons, but you're using std::algorithm and can declare the input vector const.
// Calculates the distance between two points and returns true if said
// distance is under its threshold.
bool isTooClose(const Point& lhs, const Point& rhs, int threshold = 1);

void filter(const vector<Point>& vec,  // the original vector, passed in
            vector<Point>& out)        // the output vector, returned however you like
{
    for (auto b = vec.begin(), e = vec.end(); b != e; ++b) {
        const Point& candidate = *b;
        // a lambda, since bind1st cannot bind a three-argument function
        if (find_if(out.begin(), out.end(),
                    [&](const Point& p){ return isTooClose(candidate, p); })
            == out.end())
        { // we didn't find anyone too close to us in the output vector. Let's add!
            out.push_back(candidate);
        }
    }
}
std::list<>.erase(remove_if(...)) using functors
http://en.wikipedia.org/wiki/Erase-remove_idiom
Update (added code):
struct IsNeighbour : public std::unary_function<int, bool>
{
    IsNeighbour(int dist)
      : m_distance(dist), m_first(true), m_old_value(0) {}
    bool operator()(int a)
    {
        // Never drop the very first element; afterwards compare against the
        // previously seen value.
        bool result = !m_first && abs(a - m_old_value) <= m_distance;
        m_first = false;
        m_old_value = a;
        return result;
    }
    int m_distance;
    bool m_first;
    int m_old_value;
};
main function...
std::list<int> data = {1,2,5,10,15,16,20};
data.erase(std::remove_if(data.begin(), data.end(), IsNeighbour(1)), data.end());