I have the following code where AMAT is currently a dense matrix. However, most of its elements are zero, so it is essentially a sparse matrix. I understand that block operations are not supported for Eigen sparse matrices. I am wondering how I can rewrite this code if I replace AMAT with a sparse matrix. BMAT is a 9x9 dense matrix, and every 3x3 block of BMAT is added to a specific block of AMAT. BMAT is calculated outside this loop.
for (j = 0; j < 5000; j++) {
    id1 = ids(0, j);
    id2 = ids(1, j);
    id3 = ids(2, j);
    AMAT.block<3,3>(id1*3, id1*3) = AMAT.block<3,3>(id1*3, id1*3) + BMAT.block<3,3>(0, 0);
    AMAT.block<3,3>(id1*3, id2*3) = AMAT.block<3,3>(id1*3, id2*3) + BMAT.block<3,3>(0, 3);
    AMAT.block<3,3>(id1*3, id3*3) = AMAT.block<3,3>(id1*3, id3*3) + BMAT.block<3,3>(0, 6);
    AMAT.block<3,3>(id2*3, id1*3) = AMAT.block<3,3>(id2*3, id1*3) + BMAT.block<3,3>(3, 0);
    AMAT.block<3,3>(id2*3, id2*3) = AMAT.block<3,3>(id2*3, id2*3) + BMAT.block<3,3>(3, 3);
    AMAT.block<3,3>(id2*3, id3*3) = AMAT.block<3,3>(id2*3, id3*3) + BMAT.block<3,3>(3, 6);
    AMAT.block<3,3>(id3*3, id1*3) = AMAT.block<3,3>(id3*3, id1*3) + BMAT.block<3,3>(6, 0);
    AMAT.block<3,3>(id3*3, id2*3) = AMAT.block<3,3>(id3*3, id2*3) + BMAT.block<3,3>(6, 3);
    AMAT.block<3,3>(id3*3, id3*3) = AMAT.block<3,3>(id3*3, id3*3) + BMAT.block<3,3>(6, 6);
}
This could work (untested, and I don't know the actual types of your matrices). The idea is to write a custom iterator which provides the indices and values of every entry of AMAT and to pass that to setFromTriplets (duplicate entries will be summed together). This iterates twice through your index list and unfortunately does not exploit the block structure of AMAT, but it executes in O(nnz) time.
#include <Eigen/SparseCore>

struct AMAT_constructor {
    struct AMAT_iterator {
        // j indexes the column of ids; k is a linear (column-major) index into the 9x9 BMAT
        bool operator==(AMAT_iterator const& other) const {
            return j == other.j && k == other.k;
        }
        bool operator!=(AMAT_iterator const& other) const {
            return !(*this == other);
        }
        Eigen::Index operator-(AMAT_iterator const& other) const {
            return (j - other.j) * 81 + k - other.k;
        }
        AMAT_iterator const* operator->() const { return this; }
        AMAT_iterator const& operator*() const { return *this; }
        float value() const { return BMAT(k); }
        // map the k-th entry of BMAT to its global row/column in AMAT via the ids of entry j
        Eigen::Index row() const { return ids((k / 3) % 3, j) * 3 + k % 3; }
        Eigen::Index col() const { return ids(k / 27, j) * 3 + (k / 9) % 3; }
        AMAT_iterator& operator++() {
            if (++k == 81) {
                k = 0;
                ++j;
            }
            return *this;
        }
        Eigen::Index j, k;
        Eigen::Matrix3Xi const& ids;
        Eigen::Matrix<float, 9, 9> const& BMAT;
    };
    Eigen::Matrix3Xi const& ids;
    Eigen::Matrix<float, 9, 9> const& BMAT;
    AMAT_iterator begin() const { return AMAT_iterator{0, 0, ids, BMAT}; }
    AMAT_iterator end() const { return AMAT_iterator{ids.cols(), 0, ids, BMAT}; }
};

// use it like this:
Eigen::SparseMatrix<float> foo(Eigen::Matrix3Xi const& ids,
                               Eigen::Matrix<float, 9, 9> const& BMAT,
                               Eigen::Index sizeA) {
    Eigen::SparseMatrix<float> AMAT(sizeA, sizeA);
    AMAT_constructor Ac{ids, BMAT};
    AMAT.setFromTriplets(Ac.begin(), Ac.end());
    return AMAT;
}
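If the custom iterator feels too heavyweight, a more conventional (but more memory-hungry) sketch is to collect an explicit std::vector of Eigen::Triplet entries and let setFromTriplets sum the duplicates. The types below simply mirror the assumptions above (float scalars, a 3xN integer ids matrix); untested:

#include <vector>
#include <Eigen/SparseCore>

Eigen::SparseMatrix<float> foo_triplets(Eigen::Matrix3Xi const& ids,
                                        Eigen::Matrix<float, 9, 9> const& BMAT,
                                        Eigen::Index sizeA) {
    std::vector<Eigen::Triplet<float>> triplets;
    triplets.reserve(ids.cols() * 81);              // 81 entries per index triple
    for (Eigen::Index j = 0; j < ids.cols(); ++j) {
        for (int br = 0; br < 3; ++br)              // block row of BMAT
            for (int bc = 0; bc < 3; ++bc)          // block column of BMAT
                for (int r = 0; r < 3; ++r)
                    for (int c = 0; c < 3; ++c)
                        triplets.emplace_back(ids(br, j) * 3 + r,
                                              ids(bc, j) * 3 + c,
                                              BMAT(br * 3 + r, bc * 3 + c));
    }
    Eigen::SparseMatrix<float> AMAT(sizeA, sizeA);
    AMAT.setFromTriplets(triplets.begin(), triplets.end()); // duplicates are summed
    return AMAT;
}

This allocates roughly ids.cols() * 81 triplets up front, so the iterator version above is preferable when memory matters.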
As I understand the tutorial at https://eigen.tuxfamily.org/dox/group__TutorialBlockOperations.html, block operations are possible, but you need to know the number of rows and columns at compile time.
I have several instances in my code, where I have a condition based on coefficients of 1xN arrays, and need to set whole columns of MxN arrays depending on these conditions. In my case, N is Eigen::Dynamic and M ranges from 2 to 4, but is a compile-time constant in each instance.
Here's a simple function illustrating what I mean, with a and b being the 1xN arrays which form the condition, c being a 2xN array with additional data, and res being an out-parameter, whose columns are always set as a whole:
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

template<Index nRows>
using ArrayNXd = Array<double, nRows, Dynamic>;

using Array1Xd = ArrayNXd<1>;
using Array2Xd = ArrayNXd<2>;
using Array3Xd = ArrayNXd<3>;

void asFunction(
    Array3Xd& res,
    const Array1Xd& a, const Array1Xd& b, const Array2Xd& c
){
    for (Index col{0}; col < a.cols(); ++col){
        if ( a[col] > b[col] )
            res.col(col) = Array3d{
                a[col] + b[col],
                (a[col] + b[col]) * c(0, col),
                (a[col] - b[col]) * c(1, col)
            };
        else
            res.col(col) = Array3d{
                a[col] - b[col],
                a[col] + b[col],
                (a[col] + b[col]) * (a[col] - b[col])
            };
    }
}

int main(){
    Array1Xd a (3), b(3);
    Array2Xd c (2, 3);
    a << 1, 2, 3;
    b << 0, 1, 2;
    c <<
        0, 1, 2,
        1, 2, 3;
    Array3Xd res (3,3);
    asFunction(res, a, b, c);
    std::cout << "as function:\n" << res << "\n";
    return 0;
}
Functions similar to this are used in a performance-critical section* of my code, and I feel like I'm leaving performance on the table, because using loops with Eigen types is typically not the optimal solution.
*yes, I profiled it.
I wrote the same function as a NullaryExpr, but that was a bit slower. I guess that makes sense, given the additional evaluations of the condition(s) and the branching for each row:
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

template<Index nRows>
using ArrayNXd = Array<double, nRows, Dynamic>;

using Array1Xd = ArrayNXd<1>;
using Array2Xd = ArrayNXd<2>;
using Array3Xd = ArrayNXd<3>;

class MyFunctor
{
public:
    using Scalar = double;
    static constexpr Index
        RowsAtCompileTime { 3 },
        MaxRowsAtCompileTime { 3 },
        ColsAtCompileTime { Dynamic },
        MaxColsAtCompileTime { Dynamic };
    using DenseType = Array<
        Scalar , RowsAtCompileTime, ColsAtCompileTime,
        ColMajor, MaxRowsAtCompileTime, MaxColsAtCompileTime
    >;
private:
    typename Array1Xd::Nested m_a;
    typename Array1Xd::Nested m_b;
    typename Array2Xd::Nested m_c;
public:
    MyFunctor(
        const Array1Xd& a,
        const Array1Xd& b,
        const Array2Xd& c
    ) : m_a {a}, m_b {b}, m_c{c}
    {}
    bool cond(Index col) const {
        return m_a[col] > m_b[col];
    }
    Scalar func1(Index col) const {
        return m_a[col] + m_b[col];
    }
    Scalar func2(Index col) const {
        return m_a[col] - m_b[col];
    }
    Scalar func3(Index row, Index col) const {
        switch(row){
            case 0: return func1(col);
            case 1: return func1(col) * m_c(0, col);
            case 2: return func2(col) * m_c(1, col);
            default: __builtin_unreachable();
        }
    }
    Scalar func4(Index row, Index col) const {
        switch (row){
            case 0: return func2(col);
            case 1: return func1(col);
            case 2: return func1(col) / func2(col);
            default: __builtin_unreachable();
        }
    }
    Scalar operator() (Index row, Index col) const {
        if ( cond(col) )
            return func3(row, col);
        else
            return func4(row, col);
    }
};

using MyReturnType = Eigen::CwiseNullaryOp<
    MyFunctor, typename MyFunctor::DenseType
>;

MyReturnType asFunctor(
    const Array1Xd& a,
    const Array1Xd& b,
    const Array2Xd& c
){
    using DenseType = typename MyFunctor::DenseType;
    return DenseType::NullaryExpr(
        3, a.cols(),
        MyFunctor(a, b, c)
    );
}

int main(){
    Array1Xd a (3), b(3);
    Array2Xd c (2, 3);
    a << 1, 2, 3;
    b << 0, 1, 2;
    c <<
        0, 1, 2,
        1, 2, 3;
    std::cout << "as functor:\n" << asFunctor(a,b,c) << "\n";
    return 0;
}
My question is: is there a more efficient way of implementing logic like the above (evaluate a scalar condition for each column of a matrix and set the whole column based on that condition) using the Eigen library?
Note: using an expression would be slightly preferred, because I don't need to worry about memory allocation, out-parameters, etc., and the code can be written with scalars in mind, which makes it much more easily understandable.
Edit: Note2: I tried using <Condition>.template replicate<nRows,1>().select(..., ...) as well, but it was slower and harder to read.
You can use Eigen's select method, but it works coefficient-wise on expressions of matching shape, so with a 1xN condition you still have to loop along the other dimension (here, the three rows).
const auto condition = a > b;
res.row(0) = condition.select(a + b /*true*/, a - b /*false*/);
res.row(1) = condition.select((a + b) * c.row(0), a + b);
res.row(2) = condition.select((a - b) * c.row(1), (a + b) * (a - b));
Note that you would probably be faster if you transposed all your arrays. Then each operation runs down a contiguous column, which vectorizes much better since Eigen is column-major by default.
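For illustration, here is a rough sketch of that transposed layout (my own variant, untested): a and b become Nx1 columns, c becomes Nx2 and res becomes Nx3, so each select writes one contiguous column of res.

#include <Eigen/Dense>

// local aliases mirroring Eigen's naming; defined here so the sketch is self-contained
using ArrayXd  = Eigen::Array<double, Eigen::Dynamic, 1>;
using ArrayX2d = Eigen::Array<double, Eigen::Dynamic, 2>;
using ArrayX3d = Eigen::Array<double, Eigen::Dynamic, 3>;

void asFunctionTransposed(ArrayX3d& res, const ArrayXd& a,
                          const ArrayXd& b, const ArrayX2d& c)
{
    const auto condition = a > b;
    res.col(0) = condition.select(a + b, a - b);
    res.col(1) = condition.select((a + b) * c.col(0), a + b);
    res.col(2) = condition.select((a - b) * c.col(1), (a + b) * (a - b));
}

With a large N, each assignment streams over a contiguous block of length N, which is what the note above is getting at.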
So I only looked at this bit of code:
for (Index col{0}; col < a.cols(); ++col){
    if ( a[col] > b[col] )
        res.col(col) = Array3d{
            a[col] + b[col],
            (a[col] + b[col]) * c(0, col),
            (a[col] - b[col]) * c(1, col)
        };
    else
        res.col(col) = Array3d{
            a[col] - b[col],
            a[col] + b[col],
            (a[col] + b[col]) * (a[col] - b[col])
        };
}
I suspect, but cannot prove, that those a[col] and b[col] are getting accessed every single time you call them. You might want to try making short temporaries for the values that you reuse. For example:
for (Index col{0}; col < a.cols(); ++col){
    auto acol = a[col];
    auto bcol = b[col];
    auto apb = acol + bcol;
    auto amb = acol - bcol;
    if ( acol > bcol )
        res.col(col) = Array3d{
            apb,
            apb * c(0, col),
            amb * c(1, col)
        };
    else
        res.col(col) = Array3d{
            amb,
            apb,
            apb * amb
        };
}
And yes, I know this isn't exactly what you wanted, but maybe it helps.
I want to add two boolean vectors:
vector<bool> v1 = {0,0,1};
vector<bool> v2 = {1,0,1};
vector<bool> resultedVector = v1 + v2;
The answer should be:
resultedVector = {1,1,0};
Does anyone know how to do this in C++/C++11?
I want to increment a given boolean vector by 1 each time, using only binary operations, so that I can build a boolean truth table for a given number of variables.
To perform binary addition in C++, you can use the function described here:
Adding binary numbers in C++
I implemented the function from that link to fit your specifications like this:
#include <cstddef>
#include <vector>

std::vector<bool> add(const std::vector<bool>& a, const std::vector<bool>& b)
{
    bool c = false; // carry, must start at 0
    std::vector<bool> result;
    for (std::size_t i = 0; i < a.size(); i++) {
        result.push_back(false);
        result[i] = (a[i] ^ b[i]) ^ c;               // sum bit
        c = (a[i] & b[i]) | (a[i] & c) | (b[i] & c); // carry out
    }
    return result;
}
This function takes two vectors of bools (and assumes they are the same size) and returns their sum as a vector. Obviously it doesn't handle overflow or numbers of different sizes; you can modify it yourself if you need those capabilities. Also, you seem to be asking about an overloaded operator for a bool vector; you can do that by reading up on operator overloading, but the logic above is what lets you add two boolean numbers stored in vectors.
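If you also want the v1 + v2 syntax from the question, a minimal sketch (simply forwarding to the add() function above) could look like this:

// Hypothetical operator+ overload; it inherits add()'s assumptions
// (equal sizes, no overflow handling).
std::vector<bool> operator+(const std::vector<bool>& a, const std::vector<bool>& b)
{
    return add(a, b);
}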
I'm not sure that I understand your question. Since this looks like homework and the point of the question seems to be operator overloading, here's an idea, not the complete answer:
#include <vector>

std::vector< bool > operator+( const std::vector<bool>& a, const std::vector<bool>& b )
{
    std::vector< bool > r;
    // your code goes here
    return r;
}

int main()
{
    std::vector< bool > a, b, c;
    c = a + b;
    return 0;
}
EDIT - one day later
Here's a solution to your increment problem (demo):
#include <iostream>
#include <vector>

// preinc - no grow on overflow
std::vector< bool >& operator++( std::vector<bool>& v )
{
    // note: e is a std::vector<bool>::reference proxy, so e = !e flips the stored bit
    for ( auto e : v )
        if ( (e = !e) )
            break;
    return v;
}

// postinc - no grow on overflow
std::vector< bool > operator++( std::vector<bool>& v, int )
{
    auto t = v; // copy before incrementing
    operator++( v );
    return t;
}

// insert
std::ostream& operator<<( std::ostream& os, const std::vector< bool >& v )
{
    for ( std::vector< bool >::const_reverse_iterator ci = v.rbegin(); ci != v.rend(); ++ci )
        os << ( *ci ? '1' : '0' );
    return os;
}

int main()
{
    std::vector< bool > b {0,0,0,0};
    for ( int i = 0; i < 16; ++i )
    {
        std::cout << b << std::endl;
        ++b;
    }
    return 0;
}
Here's how you can use a stateful functor:
#include <algorithm>
#include <cassert>
#include <iostream>
#include <iterator>
#include <vector>

struct BitAdder {
    bool carry_ = 0x0; // Range is [0, 1].

    // Only accepts single bit values for a and b.
    bool operator()(bool a, bool b) {
        assert(a == (a & 0x1) && b == (b & 0x1));
        char sum = a + b + carry_;
        carry_ = (sum & 0x2) >> 1; // Keep in range.
        return sum & 0x1;
    }
};

int main() {
    // Code is more straightforward when bits are stored in reverse.
    std::vector<bool> v = {0, 1, 1, 1, 0};      // To be interpreted as: 1110 (14).
    std::vector<bool> w = {1, 0, 1, 1, 0};      // To be interpreted as: 1101 (13).
    std::vector<bool> result = {0, 0, 0, 0, 0}; // Will become: 11011 (27).

    assert(v.size() <= w.size());      // v and w can be iterated over together.
    assert(v.size() <= result.size()); // There is enough space to store the bits.
    assert(v[v.size() - 1] + w[v.size() - 1] < 2); // No overflow can happen.

    std::transform(v.cbegin(), v.cend(), w.cbegin(), result.begin(), BitAdder());

    std::cout << "want: 11011, got: ";
    std::copy(result.crbegin(), result.crend(), std::ostream_iterator<bool>(std::cout));
    std::cout << '\n';
}
Live Demo
I have a vector with the digits of a number; the vector represents a big integer in a system with base 2^32. For example:
vector<unsigned> vec = {453860625, 469837947, 3503557200, 40};
This vector represent this big integer:
base = 2 ^ 32
3233755723588593872632005090577 = 40 * base ^ 3 + 3503557200 * base ^ 2 + 469837947 * base + 453860625
How can I get this decimal representation as a string?
Here is an inefficient way to do what you want, get a decimal string from a vector of word values representing an integer of arbitrary size.
I would have preferred to implement this as a class, for better encapsulation and so math operators could be added, but to better comply with the question, this is just a bunch of free functions for manipulating std::vector<unsigned> objects. This does use a typedef BiType as an alias for std::vector<unsigned> however.
Functions for doing the binary division make up most of this code. Much of it duplicates what can be done with std::bitset, but for bitsets of arbitrary size, stored as vectors of unsigned words. If you want to improve efficiency, plug in a division algorithm which does per-word operations instead of per-bit operations. Also, the division code is general-purpose, even though it is only ever used to divide by 10, so you could replace it with special-purpose division code.
The code generally assumes a vector of unsigned words and also that the base is the maximum unsigned value, plus one. I left a comment wherever things would go wrong for smaller bases or bases which are not a power of 2 (binary division requires base to be a power of 2).
Also, I only tested for 1 case, the one you gave in the OP -- and this is new, unverified code, so you might want to do some more testing. If you find a problem case, I'll be happy to fix the bug here.
#include <iostream>
#include <string>
#include <vector>
#include <algorithm>
namespace bigint {
using BiType = std::vector<unsigned>;
// cmp compares a with b, returning 1:a>b, 0:a==b, -1:a<b
int cmp(const BiType& a, const BiType& b) {
const auto max_size = std::max(a.size(), b.size());
for(auto i=max_size-1; i+1; --i) {
const auto wa = i < a.size() ? a[i] : 0;
const auto wb = i < b.size() ? b[i] : 0;
if(wa != wb) { return wa > wb ? 1 : -1; }
}
return 0;
}
bool is_zero(BiType& bi) {
for(auto w : bi) { if(w) return false; }
return true;
}
// canonize removes leading zero words
void canonize(BiType& bi) {
const auto size = bi.size();
if(!size || bi[size-1]) return;
for(auto i=size-2; i+1; --i) {
if(bi[i]) {
bi.resize(i + 1);
return;
}
}
bi.clear();
}
// subfrom subtracts b from a, modifying a
// a >= b must be guaranteed by caller
void subfrom(BiType& a, const BiType& b) {
unsigned borrow = 0;
for(std::size_t i=0; i<b.size(); ++i) {
if(b[i] || borrow) {
// TODO: handle error if i >= a.size()
const auto w = a[i] - b[i] - borrow;
// this relies on the automatic w = w (mod base),
// assuming unsigned max is base-1
// if this is not the case, w must be set to w % base here
borrow = w >= a[i];
a[i] = w;
}
}
for(auto i=b.size(); borrow; ++i) {
// TODO: handle error if i >= a.size()
borrow = !a[i];
--a[i];
// a[i] must be set modulo base here too
// (this is automatic when base is unsigned max + 1)
}
}
// binary division and its helpers: these require base to be a power of 2
// hi_bit_set is base/2
// the definition assumes CHAR_BIT == 8
const auto hi_bit_set = unsigned(1) << (sizeof(unsigned) * 8 - 1);
// shift_right_1 divides bi by 2, truncating any fraction
void shift_right_1(BiType& bi) {
unsigned carry = 0;
for(auto i=bi.size()-1; i+1; --i) {
const auto next_carry = (bi[i] & 1) ? hi_bit_set : 0;
bi[i] >>= 1;
bi[i] |= carry;
carry = next_carry;
}
// if carry is nonzero here, 1/2 was truncated from the result
canonize(bi);
}
// shift_left_1 multiplies bi by 2
void shift_left_1(BiType& bi) {
unsigned carry = 0;
for(std::size_t i=0; i<bi.size(); ++i) {
const unsigned next_carry = !!(bi[i] & hi_bit_set);
bi[i] <<= 1; // assumes high bit is lost, i.e. base is unsigned max + 1
bi[i] |= carry;
carry = next_carry;
}
if(carry) { bi.push_back(1); }
}
// sets an indexed bit in bi, growing the vector when required
void set_bit_at(BiType& bi, std::size_t index, bool set=true) {
std::size_t widx = index / (sizeof(unsigned) * 8);
std::size_t bidx = index % (sizeof(unsigned) * 8);
if(bi.size() < widx + 1) { bi.resize(widx + 1); }
if(set) { bi[widx] |= unsigned(1) << bidx; }
else { bi[widx] &= ~(unsigned(1) << bidx); }
}
// divide divides n by d, returning the result and leaving the remainder in n
// this is implemented using binary division
BiType divide(BiType& n, BiType d) {
if(is_zero(d)) {
// TODO: handle divide by zero
return {};
}
std::size_t shift = 0;
while(cmp(n, d) == 1) {
shift_left_1(d);
++shift;
}
BiType result;
do {
if(cmp(n, d) >= 0) {
set_bit_at(result, shift);
subfrom(n, d);
}
shift_right_1(d);
} while(shift--);
canonize(result);
canonize(n);
return result;
}
std::string get_decimal(BiType bi) {
std::string dec_string;
// repeat division by 10, using the remainder as a decimal digit
// this will build a string with digits in reverse order, so
// before returning, it will be reversed to correct this.
do {
const auto next_bi = divide(bi, {10});
const char digit_value = static_cast<char>(bi.size() ? bi[0] : 0);
dec_string.push_back('0' + digit_value);
bi = next_bi;
} while(!is_zero(bi));
std::reverse(dec_string.begin(), dec_string.end());
return dec_string;
}
}
int main() {
bigint::BiType my_big_int = {453860625, 469837947, 3503557200, 40};
auto dec_string = bigint::get_decimal(my_big_int);
std::cout << dec_string << '\n';
}
Output:
3233755723588593872632005090577
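As noted above, the per-bit division is the main inefficiency. A hedged sketch of a special-purpose per-word divide-by-10 that could replace divide(bi, {10}) inside get_decimal follows; it assumes the same invariants as the code above (32-bit unsigned words, base = 2^32) and needs <cstdint>:

#include <cstdint>

// Divides bi by 10 in place and returns the remainder (0..9),
// processing one 32-bit word at a time, most significant first.
unsigned div10(bigint::BiType& bi) {
    std::uint64_t rem = 0;
    for (auto i = bi.size(); i-- > 0; ) {
        const std::uint64_t cur = (rem << 32) | bi[i];
        bi[i] = static_cast<unsigned>(cur / 10);
        rem = cur % 10;
    }
    bigint::canonize(bi);
    return static_cast<unsigned>(rem);
}

get_decimal would then push back '0' + div10(bi) on each pass until is_zero(bi), exactly as it does now with the remainder left in bi.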
Eigen::VectorXd has a Scalar operator()(Index i) which returns the coefficient at index i in the vector. However, since Eigen::VectorXd is a special type of Eigen::Matrix, i.e. of type Eigen::Matrix<Scalar, Eigen::Dynamic, 1>, there is also a Scalar operator()(Index i, Index j).
Question:
Can I assume that it is safe (i.e. no undefined behaviour) to use the second version if I set j to zero? In other words, is the code below OK?
Eigen::VectorXd v(4);
v << 1, 2, 3, 4;
std::cout << v(2, 0); // displays 3
It looks like it's OK, there are no failed assertions or warnings when compiled in debug mode with all warnings on, but I am not 100% sure.
It is safe as long as v is a column vector, whereas using v(i) works for both column and row vectors, e.g.:
template<typename T>
void foo(const T &v) {
    v(2);   // OK
    v(2,0); // -> out of bounds runtime assertion
}

MatrixXd mat(10,10);
foo(mat.row(5));
I'll expound upon @ggael's answer. If you look at the operator() definitions in DenseCoeffsBase.h (I'm quoting 3.2.10), you'll see that they both call coeff (or coeffRef):
EIGEN_STRONG_INLINE CoeffReturnType operator()(Index row, Index col) const
{
    eigen_assert(row >= 0 && row < rows()
              && col >= 0 && col < cols());
    return derived().coeff(row, col);
}

EIGEN_STRONG_INLINE CoeffReturnType
operator()(Index index) const
{
    eigen_assert(index >= 0 && index < size());
    return derived().coeff(index);
}
Looking at the definitions of coeffRef in PlainObjectBase.h we see that the offset is calculated simply:
EIGEN_STRONG_INLINE Scalar& coeffRef(Index rowId, Index colId)
{
    if(Flags & RowMajorBit)
        return m_storage.data()[colId + rowId * m_storage.cols()];
    else // column-major
        return m_storage.data()[rowId + colId * m_storage.rows()];
}

EIGEN_STRONG_INLINE Scalar& coeffRef(Index index)
{
    return m_storage.data()[index];
}
So in the case of a row vector, you would have to write v(0,2) to avoid possible assertion failures/out-of-bounds errors.
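A quick illustration of that last point (my own example, assuming <iostream> and <Eigen/Dense>; the assertion only fires in a debug build):

Eigen::RowVectorXd r(4);
r << 1, 2, 3, 4;
std::cout << r(2) << "\n";    // 3 - linear indexing works for row and column vectors
std::cout << r(0, 2) << "\n"; // 3 - row 0, column 2
// std::cout << r(2, 0);      // would trigger the out-of-bounds assertion in debug mode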
I have a std::vector<PLY> that holds a number of structs:
struct PLY {
    int x;
    int y;
    int greyscale;
};
Some of the PLYs could be duplicates in terms of their position x and y, but not necessarily in terms of their greyscale value. What is the best way to find those (position) duplicates and replace them with a single PLY instance whose greyscale value is the average greyscale of all duplicates?
E.g.: PLY a{1,1,188} is a duplicate of PLY b{1,1,255}: same (x, y) position, possibly different greyscale.
Based on your description of Ply you need these operators:
auto operator==(const Ply& a, const Ply& b)
{
    return a.x == b.x && a.y == b.y;
}

auto operator<(const Ply& a, const Ply& b)
{
    // whenever you can be lazy!
    return std::make_pair(a.x, a.y) < std::make_pair(b.x, b.y);
}
Very important: if the definition "two Ply are identical if their x and y are identical" is not generally valid, then defining comparison operators that ignore greyscale is a bad idea. In that case you should define separate function objects or non-operator functions and pass those to the algorithms instead.
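For example, a minimal sketch of such non-operator function objects (the names are mine, and std::make_pair needs <utility>):

// Compare by position only; greyscale is deliberately ignored.
struct SamePosition {
    bool operator()(const Ply& a, const Ply& b) const {
        return a.x == b.x && a.y == b.y;
    }
};

struct PositionLess {
    bool operator()(const Ply& a, const Ply& b) const {
        return std::make_pair(a.x, a.y) < std::make_pair(b.x, b.y);
    }
};

// usage: std::sort(std::begin(v), std::end(v), PositionLess{});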
There is a nice rule of thumb that a function should not have more than one loop. So instead of two nested for loops, we define this helper function, which computes the average of consecutive duplicates and also returns the end of the consecutive-duplicates range:
// prereq: [begin, end) has at least one element
// i.e. begin != end
template <class It>
auto compute_average_duplicates(It begin, It end) -> std::pair<int, It>
// (sadly not C++17) concepts:
//requires requires(It i) { {*i} -> Ply; }
{
    auto it = begin + 1;
    int sum = begin->greyscale;
    for (; it != end && *begin == *it; ++it) {
        sum += it->greyscale;
    }
    // you might need rounding instead of truncation:
    return std::make_pair(sum / std::distance(begin, it), it);
}
With this we can have our algorithm:
auto foo()
{
    std::vector<Ply> v = {{1, 5, 10}, {2, 4, 6}, {1, 5, 2}};
    std::sort(std::begin(v), std::end(v));
    for (auto i = std::begin(v); i != std::end(v); ++i) {
        decltype(i) j;
        int average;
        std::tie(average, j) = compute_average_duplicates(i, std::end(v));
        // C++17 (coming soon in a compiler near you):
        // auto [average, j] = compute_average_duplicates(i, std::end(v));
        if (i + 1 == j)
            continue;
        i->greyscale = average;
        v.erase(i + 1, j);
        // std::vector::erase invalidates iterators and references
        // at or after the point of the erase,
        // which means i remains valid, and `++i` (from the for) is correct
    }
}
You can apply lexicographical sorting first. During the sort you should take care of overflowing greyscale. With the current approach you will have some round-off error, but it will be small, since I first sum and only then average.
In the second part you need to remove duplicates from the array. I used an additional array of indices so that every element is copied at most once. If you have some forbidden value for x, y or greyscale you can use it as a marker and thus get along without the additional array.
#include <algorithm>
#include <vector>
using namespace std;

struct PLY {
    int x;
    int y;
    int greyscale;
};

int main()
{
    struct comp
    {
        bool operator()(const PLY &a, const PLY &b) { return a.x != b.x ? a.x < b.x : a.y < b.y; }
    };
    vector<PLY> v{ {1,1,1}, {1,2,2}, {1,1,2}, {1,3,5}, {1,2,7} };
    sort(begin(v), end(v), comp());
    vector<bool> ind(v.size(), true);
    int s = 0;
    for (int i = 1; i < v.size(); ++i)
    {
        if (v[i].x == v[i - 1].x && v[i].y == v[i - 1].y)
        {
            v[s].greyscale += v[i].greyscale;
            ind[i] = false;
        }
        else
        {
            int d = i - s;
            if (d != 1)
            {
                v[s].greyscale /= d;
            }
            s = i;
        }
    }
    // don't forget to average the final group as well
    {
        int d = static_cast<int>(v.size()) - s;
        if (d > 1)
        {
            v[s].greyscale /= d;
        }
    }
    s = 0;
    for (int i = 0; i < v.size(); ++i)
    {
        if (ind[i])
        {
            if (s != i)
            {
                v[s] = v[i];
            }
            ++s;
        }
    }
    v.resize(s);
}
So you need to check whether, e.g., PLY a1{1,1,1} is a duplicate of PLY a2{2,2,1}.
The simple method is to override operator== so that it compares a1.x == a2.x and a1.y == a2.y. Then you can write your own function removeDuplicates(std::vector<PLY>& mPLY) which uses the vector's iterators to compare and remove elements. But it is better to use std::list if you need to remove from the middle of the container frequently.
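A minimal sketch of what this answer describes (removeDuplicates is an assumed name; note that, unlike the averaging answers above, this simply keeps the first PLY found at each position):

#include <algorithm>
#include <vector>

bool operator==(const PLY& a, const PLY& b)
{
    return a.x == b.x && a.y == b.y; // greyscale intentionally ignored
}

void removeDuplicates(std::vector<PLY>& mPLY)
{
    for (auto it = mPLY.begin(); it != mPLY.end(); ++it) {
        // erase every later element with the same (x, y) position;
        // erasing only behind `it` keeps `it` itself valid
        mPLY.erase(std::remove(it + 1, mPLY.end(), *it), mPLY.end());
    }
}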