In D-dimensional space, consider two adjacent simplicial facets V (visible) and H (horizon); for a tetrahedron in 3D these are 2-dimensional triangular faces. They are defined by two arrays, PV and PH, of D D-dimensional points each. The order of elements in these arrays is strictly defined and in turn defines the orientation of each facet in space. The indices of these points into the universal point set U (the points involved in the geometric calculations) are stored as two std::list< std::size_t >s. A ridge is a (D - 2)-dimensional boundary element of a facet (e.g., the 1-dimensional edges of a tetrahedron in 3D). To determine which points are common to both facets I can simply do the following:
point_list visible_set_ = visible_facet_.vertices_;
point_list horizon_set_ = horizon_facet_.vertices_;
visible_set_.sort();
horizon_set_.sort();
point_list ridge_;
std::set_intersection(visible_set_.cbegin(), visible_set_.cend(),
                      horizon_set_.cbegin(), horizon_set_.cend(),
                      std::back_inserter(ridge_));
But during the sort I lose the information about the codirectionality of the ridge R (defined as ridge_ above) with the corresponding ridge of either facet.
The codirectionality can be determined afterwards by counting the minimal number of swaps needed to permute 1) the points of the ridge, in the order they appear in the facet of interest, into 2) the points of the produced ridge R itself. But I am sure there is overhead here.
Another way to determine the codirectionality is to compute the oriented areas of two facets: one constructed from the exclusive point (the difference of the facet and the ridge) followed by the ridge, and one produced by a simple modification of the facet: moving the exclusive point to the front, so that it is located as in the first of the two facets.
How can I intersect two unsorted arrays with a fixed order of elements, so that the resulting array preserves the order of elements as they appear in the first (xor second) array? Is there such an algorithm with time complexity below O(n²)? I am especially interested in whether it can be implemented with the STL.
If I understand the problem correctly, you can use the following scheme. First, make copies of your original arrays (call them visible_set_for_sorting and horizon_set_for_sorting). Then sort them. Then form the intersection in the following way:
std::set<int> intersection;
std::set_intersection(
    visible_set_for_sorting.begin(), visible_set_for_sorting.end(),
    horizon_set_for_sorting.begin(), horizon_set_for_sorting.end(),
    std::inserter(intersection, intersection.begin()));
Now you can iterate over either original array (visible_set_ or horizon_set_), check whether each point is in intersection, and form the resulting list in the needed order.
std::list<int> list;
for (int p : visible_set_)
{
    if (intersection.find(p) != intersection.end())
    {
        list.push_back(p);
    }
}
Complexity shouldn't be higher than O(N*log(N)).
My version replaces the exclusive point with the furthest point, keeping its position as in the original visible facet. A "newfacet" (in terms of the original qhull implementation) is created as the result:
point_set horizon_(horizon_facet_.vertices_.cbegin(),
                   horizon_facet_.vertices_.cend()); // n * log(n) +
auto const hend = horizon_.end();
point_list ridge_;
for (size_type const p : vertices_) { // n *
    auto const h = horizon_.find(p);  // (log(n) +
    if (h == hend) {
        ridge_.push_back(apex);
    } else {
        ridge_.push_back(p);
        horizon_.erase(h);            // const)
    }
}
Do you remember my prior question: What is causing data race in std::async here?
Even though I successfully parallelized this program, it still ran too slowly to be practical.
So I tried to improve the data structure representing a Conway's Game of Life pattern.
Brief explanation of the new structure:
class pattern {
    // NDos::Lifecell represents a cell by x and y coordinates.
    // NDos::Lifecell is equality comparable, and has a std::hash specialization.
private:
    std::unordered_map<NDos::Lifecell, std::pair<int, bool>> cells_coor;
    std::unordered_set<decltype(cells_coor)::const_iterator> cells_neigh[9];
    std::unordered_set<decltype(cells_coor)::const_iterator> cells_onoff[2];
public:
    void insert(int x, int y) {
        // if coordinate (x,y) isn't already ON,
        // turns it ON and increases each neighbor's neighbor count by 1.
    }
    void erase(int x, int y) {
        // if coordinate (x,y) isn't already OFF,
        // turns it OFF and decreases each neighbor's neighbor count by 1.
    }
    pattern generate(NDos::Liferule rule) {
        // advances the generation by 1, according to the rule
        // (for example, B3/S23).
        pattern result;
        // inserts every ON cell with 2 or 3 ON neighbors into result (S23).
        // inserts every OFF cell with exactly 3 ON neighbors into result (B3).
        return result;
    }
    // etc...
};
In brief, pattern contains the cells. It contains every ON cell, and every OFF cell that has 1 or more ON neighbor cells. It can also contain spare OFF cells.
cells_coor directly stores the cells, by using their coordinates as keys, and maps them to their number of ON neighbor cells (stored as int) and whether they are ON (stored as bool).
cells_neigh and cells_onoff store the cells indirectly, using iterators to them as keys.
The number of ON neighbors of a cell is always between 0 and 8, so cells_neigh is an array of size 9.
cells_neigh[0] stores the cells with 0 ON neighbor cells, cells_neigh[1] stores the cells with 1 ON neighbor cell, and so on.
Likewise, a cell is always either OFF or ON, so cells_onoff is a size 2 array.
cells_onoff[false] stores the OFF cells, and cells_onoff[true] stores the ON cells.
Cells must be inserted into or erased from all of cells_coor, cells_neigh and cells_onoff. In other words, if a cell is inserted into or erased from one of them, the same must be done for the others. Because of this, the elements of cells_neigh and cells_onoff are std::unordered_sets storing iterators to the actual cells, enabling fast access to the cells by neighbor count or OFF/ON state.
If this structure works, the insertion function will have an average time complexity of O(1), the erasure also O(1), and the generation O(cells_coor.size()), which is a great improvement in time complexity over the prior structure.
But as you see, there is a problem: How can I hash a std::unordered_map::const_iterator?
std::hash prohibits a specialization for them, so I have to make a custom one.
Taking their address won't work, as they are usually acquired as rvalues or temporaries.
Dereferencing them also won't work, as there are multiple cells that have 0 ON neighbor cells, or multiple cells that are OFF, etc.
So what can I do? If I can't do anything, cells_neigh and cells_onoff will be std::vector or something, sharply degrading the time complexity.
Short story: this won't work (really well)(*1). Most of the operations that you're likely going to perform on the map cells_coor will invalidate any iterators (but not pointers, as I learned) to its elements.
If you want to keep what I'd call different "views" on some collection, then the underlying container storing the actual data needs to be either not modified or must not invalidate its iterators (a linked list for example).
Perhaps I'm missing something, but why not keep 9 sets of cells for the neighbor counts and 2 sets of cells for on/off? (*2) Put differently: for what do you really need that map? (*3)
(*1): The map only invalidates pointers and iterators when rehashing occurs. You can check for that:
// Before inserting
(map.max_load_factor() * map.bucket_count()) > (map.size() + 1)
(*2): 9 sets can be reduced to 8: if a cell (x, y) is in none of the 8 sets, then it would be in the 9th set. Thus storing that information is unnecessary. Same for on/off: it's enough to store the cells that are on. All others are off.
(*3): Accessing the number of neighbours without using the map but only with sets of cells, kind of pseudo code:
unsigned number_of_neighbours(Cell const & cell) {
    // a cell has at most 8 neighbours, so start at 8, not 9
    for (unsigned neighbours = 8; neighbours > 0; --neighbours) {
        if (set_of_cells_with_neighbours(neighbours).count(cell) == 1) {
            return neighbours;
        }
    }
    return 0;
}
The repeated lookups in the sets could of course destroy actual performance, you'd need to profile that. (Asymptotic runtime is unaffected)
I am at a loss. I have been trying to implement the code at http://www.blackpawn.com/texts/pointinpoly/default.html.
However, I don't know how it is possible that the cross product presented there between two 2D vectors can also result in a 2D vector. It does not make sense to me. The same thing appears in some examples of intersection between polygons and lines in the fine book "Real-Time Collision Detection", where even scalar triple products of 2D vectors appear in the code (see page 189, for instance).
The issue is that, as far as I can tell, the pseudo cross product of two 2D vectors can only result in a scalar (v1.x*v2.y - v1.y*v2.x), or at most in a 3D vector if one appends two zeros, since that scalar represents the Z dimension. But how can it result in a 2D vector?
I am not the first one to ask this, and coincidentally it came up when trying to use the same code example: Cross product of 2 2D vectors. However, as can easily be seen, the answer, the updated original question, and the comments in that thread ended up being quite a mess, if I dare say so.
Does anyone know how I should get these 2D vectors from the cross product of two 2D vectors? If code is to be provided, I can handle C#, JavaScript and some C++.
EDIT - here is a piece of the code in the book as I mentioned above:
int IntersectLineQuad(Point p, Point q, Point a, Point b, Point c, Point d, Point &r)
{
    Vector pq = q - p;
    Vector pa = a - p;
    Vector pb = b - p;
    Vector pc = c - p;
    // Determine which triangle to test against by testing against diagonal first
    Vector m = Cross(pc, pq);
    float v = Dot(pa, m); // ScalarTriple(pq, pa, pc);
    if (v >= 0.0f) {
        // Test intersection against triangle abc
        float u = -Dot(pb, m); // ScalarTriple(pq, pc, pb);
        if (u < 0.0f) return 0;
        float w = ScalarTriple(pq, pb, pa);
        ....
For the page you linked, it seems that they talk about a triangle in 3d space:
Because the triangle can be oriented in any way in 3d-space, ...
Hence all the vectors they talk about are 3d vectors, and all the text and code makes perfect sense. Note that everything also makes sense for 2d vectors, if you consider the cross product to be a 3d vector pointing out of the screen. And they mention it on the page too:
If you take the cross product of [B-A] and [p-A], you'll get a vector pointing out of the screen.
Their code is correct too, both for 2d and 3d cases:
function SameSide(p1,p2, a,b)
    cp1 = CrossProduct(b-a, p1-a)
    cp2 = CrossProduct(b-a, p2-a)
    if DotProduct(cp1, cp2) >= 0 then return true
    else return false
For 2d, both cp1 and cp2 are vectors pointing out of screen, and the (3d) dot product is exactly what you need to check; checking just the product of corresponding Z components is the same. If everything is 3d, this is also correct. (Though I would write simply return DotProduct(cp1, cp2) >= 0.)
For int IntersectLineQuad(), I can guess that the situation is the same: the Quad, whatever it is, is a 3d object, as well as Vector and Point in code. However, if you add more details about what is this function supposed to do, this will help.
In fact, it is obvious that any problem stated in 2d can be extended to 3d, and so any approach which is valid in 3d will also be valid for the 2d case; you just need to imagine a third axis pointing out of the screen. So I think this is a valid (though confusing) technique to describe a 2d problem completely in 3d terms. You might find yourself doing some extra work, because some values will always be zero in such an approach, but in turn (almost) the same code will work in the general 3d case too.
Can someone explain the meaning of this paragraph
The great advantage of pairs is that they have built-in operations to compare themselves. Pairs are compared first-to-second element. If the first elements are not equal, the result will be based on the comparison of the first elements only; the second elements will be compared only if the first ones are equal. The array (or vector) of pairs can easily be sorted by STL internal functions.
and hence this
For example, if you want to sort the array of integer points so that they form a polygon, it’s a good idea to put them to the vector< pair<double, pair<int,int> > >, where each element of vector is { polar angle, { x, y } }. One call to the STL sorting function will give you the desired order of points.
I have been struggling for an hour to understand this.
Source
Consider looking at operator< for pair<A,B>, which is a class that looks something like:
struct pairAB {
    A a;
    B b;
};
You could translate that paragraph directly into code:
bool operator<(const pairAB& lhs, const pairAB& rhs) {
    if (lhs.a != rhs.a) {     // If the first elements are not equal
        return lhs.a < rhs.a; // the result will be based on
    }                         // the comparison of the first elements only
    return lhs.b < rhs.b;     // the second elements will be compared
                              // only if the first ones are equal.
}
Or, thinking more abstractly, this is how lexicographic sort works. Think of how you would order two words. You'd compare their first letters - if they're different, you can stop and see which one is less. If they're the same, then you go onto the second letter.
The first paragraph says that pairs have an ordering as follows: if you have (x, y) and (z, w), and you compare them, then it will first check whether x is smaller (or larger) than z: if yes, then the first pair is smaller (or larger) than the second. If x = z, however, then it will compare y and w. This makes it very convenient to do things like sorting a vector of pairs when the first elements of the pairs are more important to the order than the second elements.
The second paragraph gives an interesting application. Suppose you stand at some point on a plane, and there's a polygon enclosing you. Then each point will have an angle and a distance. But given the points, how do you know in what order should they be to form a polygon (without crisscrossing themselves)? If you store the points in this format (angle, distance), then you'll get the circling direction for free. That's actually rather neat.
The STL pair is a container to hold two objects together. Consider this for example,
pair<int, int> a, b;
The first element can be accessed via a.first and the second via a.second.
The first paragraph is telling us that the STL provides built-in operations to compare two pairs. For example, if you need to compare a and b, then the comparison is first done using a.first and b.first. If both values are the same, then the comparison is done using a.second and b.second. Since this is built-in functionality, you can easily use it with STL algorithms like sort, binary_search, etc.
The second paragraph is an example of how this might be used. Consider a situation where you would want to sort the points in a polygon. You would first want to sort them based on their polar angle, then the x co-ordinate and then the y co-ordinate. Thus we make use of the pair {angle, {x,y}}. So any comparison would be first done on the angle, then advanced to the x value and then the y value.
It will be easier to understand if we compare a simple example of pairs of last names and first names.
For example if you have pairs
{ Tomson, Ann }
{ Smith, Tony }
{ Smith, John }
and want to sort them in the ascending order you have to compare the pairs with each other.
If you compare the first two pairs
{ Tomson, Ann }
{ Smith, Tony }
then the last name of the first pair is greater than the last name of the second pair. So there is no need to also compare the first names. It is already clear that pair
{ Smith, Tony }
has to precede pair
{ Tomson, Ann }
On the other hand if you compare pairs
{ Smith, Tony }
{ Smith, John }
then the last names of the pairs are equal. So you need to compare the first names of the pairs. As John is less than Tony then it is clear that pair
{ Smith, John }
will precede pair
{ Smith, Tony }
though the last names (the first elements of the pairs) are equal.
As for the pair { polar angle, { x, y } }: if the polar angles of two different pairs are equal, then the { x, y } values, which are in turn pairs, will be compared. So if the first elements (x) are equal, then the y values will be compared.
Actually, when you have a vector/array of pairs, you don't have to care about a custom comparison when you use the sort() function. You just use sort(v.begin(), v.end()): the pairs will automatically be sorted on the basis of the first element, and when the first elements are equal, they will be compared using the second element. See the code and output at https://ideone.com/Ad2yVG; it will all be clear.
I have two sets A and B. Set A contains unique elements. Set B contains all elements. Each element in B is a 10 by 10 matrix where all entries are either 1 or 0. I need to scan through set B, and every time I encounter a new matrix I will add it to set A. Therefore set A is a subset of B containing only the unique matrices.
It seems like you might really be looking for a way to manage a large, sparse array. Trivially, you could use a hash map with your giant index as your key, and your data as the value. If you talk more about your problem, we might be able to find a more appropriate data structure for your problem.
Update:
If set B is just some set of matrices and not the set of all possible 10x10 binary matrices, then you just want a sparse array. Every time you find a new matrix, you compute its key (which could simply be the matrix converted into a 100-digit binary value, or even a 100-character string!) and look up that key. If no such key exists, insert the value 1 for that key. If the key does exist, increment and re-store the new value for that key.
Here is some code, maybe not very efficient:
#include <vector>
#include <bitset>
#include <set>
#include <iterator>
#include <algorithm>

// I assume your 10x10 boolean matrix is implemented as a bitset of 100 bits.

// Comparison of bitsets
template<size_t N>
class bitset_comparator
{
public :
    bool operator () (const std::bitset<N> & a, const std::bitset<N> & b) const
    {
        for(size_t i = 0 ; i < N ; ++i)
        {
            if( !a[i] && b[i] ) return true ;
            else if( !b[i] && a[i] ) return false ;
        }
        return false ;
    }
} ;

int main(int, char * [])
{
    std::set< std::bitset<100>, bitset_comparator<100> > A ;
    std::vector< std::bitset<100> > B ;
    // Fill B in some manner ...
    // Keeping unique elements in A
    std::copy(B.begin(), B.end(), std::inserter(A, A.begin())) ;
}
You can use std::list instead of std::vector. The relative order of elements in B is not preserved in A (elements in A are sorted).
EDIT : I inverted A and B in my first post. It's correct now. Sorry for the inconvenience. I also corrected the comparison functor.
Each element in the B is a 10 by 10 matrix where all entries are either 1 or 0.
Good, that means it can be represented by a 100-bit number. Let's round that up to 128 bits (sixteen bytes).
One approach is to use linked lists - create a structure like (in C):
typedef struct sNode {
    unsigned char bits[16];
    struct sNode *next;
} sNode;
and maintain the entire list B as a sorted linked list.
The performance will be somewhat less (a) than using the 100-bit number as an array index into a truly immense (to the point of impossible given the size of the known universe) array.
When it comes time to insert a new item into B, insert it at its desired position (before one that's equal or greater). If it was a brand new one (you'll know this if the one you're inserting before is different), also add it to A.
(a) Though probably not unmanageably so - there are options you can take to improve the speed.
One possibility is to use skip lists, for faster traversal during searches. These add another pointer that references not the next element but one 10 (or 100 or 1000) elements along. That way you can get close to the desired element reasonably quickly and just do the one-step search after that point.
Alternatively, since you're talking about bits, you can divide B into (for example) 1024 sub-B lists. Use the first 10 bits of the 100-bit value to figure out which sub-B you need to use, and only store the next 90 bits. That alone would increase search speed by a factor of about 1000 on average (use more leading bits and more sub-Bs if you need to improve on that).
You could also use a hash on the 100-bit value to generate a smaller key which you can use as an index into an array/list, but I don't think that will give you any real advantage over the method in the previous paragraph.
Convert each matrix into a string of 100 binary digits. Now run it through the Linux utilities:
sort | uniq
If you really need to do this in C++, it is possible to implement your own merge sort, then the uniq part becomes trivial.
You don't need N buckets where N is the number of all possible inputs. A binary tree will do just fine. This is implemented with the set class in C++.
vector<vector<vector<int> > > A; // vector of 10x10 matrices
// fill the matrices in A here
set<vector<vector<int> > > B(A.begin(), A.end()); // voila!
// now B contains all elements in A, but only once for duplicates
I have an unsorted vector of eigenvalues and a related matrix of eigenvectors. I'd like to sort the columns of the matrix with respect to the sorted set of eigenvalues. (e.g., if eigenvalue[3] moves to eigenvalue[2], I want column 3 of the eigenvector matrix to move over to column 2.)
I know I can sort the eigenvalues in O(N log N) via std::sort. Without rolling my own sorting algorithm, how do I make sure the matrix's columns (the associated eigenvectors) follow along with their eigenvalues as the latter are sorted?
Typically just create a structure something like this:
struct eigen {
    double value;
    double *vector;
    bool operator<(eigen const &other) const {
        return value < other.value;
    }
};
Alternatively, just put the eigenvalue/eigenvector into an std::pair -- though I'd prefer eigen.value and eigen.vector over something.first and something.second.
I've done this a number of times in different situations. Rather than sorting the array, just create a new array that has the sorted indices in it.
For example, you have a length n array (vector) evals, and a 2d nxn array evects. Create a new array index that contains the values [0, n-1].
Then rather than accessing evals as evals[i], you access it as evals[index[i]], and instead of evects[i][j], you access evects[index[i]][j].
Now you write your sort routine to sort the index array rather than the evals array, so instead of index looking like {0, 1, 2, ... , n-1}, the value in the index array will be in increasing order of the values in the evals array.
So after sorting, if you do this:
for (int i=0;i<n;++i)
{
cout << evals[index[i]] << endl;
}
you'll get a sorted list of evals.
This way you can sort anything that's associated with that evals array without actually moving memory around. This is important when n gets large; you don't want to be moving around the columns of the evects matrix.
Basically, the i'th smallest eval will be located at index[i], and that corresponds to the index[i]'th evect.
Edited to add. Here's a sort function that I've written to work with std::sort to do what I just said:
template <class DataType, class IndexType>
class SortIndicesInc
{
protected:
    DataType* mData;
public:
    SortIndicesInc(DataType* Data) : mData(Data) {}
    bool operator()(const IndexType& i, const IndexType& j) const
    {
        return mData[i] < mData[j];
    }
};
The solution depends purely on the way you store your eigenvector matrix.
The best sorting performance will be achieved if you can implement swap(evector1, evector2) so that it only rebinds pointers and the real data is left unchanged.
This could be done using something like double*, or probably something more complicated, depending on your matrix implementation.
If done this way, swap(...) wouldn't affect your sorting performance.
The idea of conglomerating your vector and matrix is probably the best way to do it in C++. I am thinking about how I would do it in R and whether that can be translated to C++. In R it's very easy: simply evec <- evec[, order(eval)]. Unfortunately, I don't know of any built-in way to perform the order() operation in C++. Perhaps someone else does, in which case this could be done in a similar way.