I have a question about C++ and a problem I have to solve. Honestly, I have no idea how to approach it, so I'd be delighted if anyone could help me or give me any clues. Thanks.
I want to print the two natural numbers a and b that form the fraction a/b for a given input.
For example, for the input 2 it should print 1 1, because we do not count (0,0) and the second point we reach on the spiral path is (1,1).
The input is a natural number and the output is the vertical and horizontal coordinate.
For example, for the input 12 the output is 2 2.
More information is in these photos:
Nice question, not that easy to implement.
The approach is to first analyze the requirements, then do a design, refine the design until it is OK and finally, at the very end, write the code.
Let us look at the requirements
We have a coordinate system in which we will move
Movement will follow a spiral pattern
Already occupied coordinates in the coordinate system cannot be reused
Fractions shall be generated, derived from the position in the coordinate system
Since a position with a column 0 would result in an illegal 0-denominator, the coordinate system will not have a column 0.
So, after moving to a new position, a reduced unique fraction shall be generated
A given number of moves shall be performed
According to the description, it would be sufficient to show just the last reduced fraction as the result. However, the whole sequence is of interest.
Good. Now we have analyzed those requirements and know what should be done.
Next step. We start to think on how the requirements shall be implemented.
Obviously, we need some implementation of a “fraction” class. The fraction class will only store reduced fractions. The sign will be normalized: if signs are present, they are evaluated, and if the resulting fraction is negative, the numerator carries the minus sign. All fractions with a 0 numerator get a 1 as denominator. We use 1 because later, during output, we will suppress a denominator of 1 and show just the numerator.
All of this happens when the fraction is created, in the constructor. That part is fairly simple.
The requirement of unique fractions can be solved by storing them in an appropriate container. Because the order is not important (only the uniqueness), we can use a std::unordered_set. This has the additional advantage of fast access via hashing. For that we need to add a hash function and an equality operator to our fraction class.
For reducing fractions, we will use the standard approach and divide numerator and denominator by the greatest common divisor (GCD). Luckily C++ has a ready to use function for this.
Last, but not least, we will overload the inserter operator <<, for generating output easily.
Now, moving along a spiral pattern. A spiral pattern can be created with the following approach.
We have a start position and a start direction
Depending on the direction we will look for a potential next position and next direction. The idea to get a spiral is the following:
If current direction is right, then the preferred next direction is up
If current direction is up, then the preferred next direction is left
If current direction is left, then the preferred next direction is down
If current direction is down, then the preferred next direction is right
All of the above applies only if the new position is not occupied
To check if a position is occupied, we will store all visited positions as unique values in a container.
For this we again select a std::unordered_set
We need to add a hash functor for this
If the potential new position, calculated with the above logic, is free (not occupied / not in our std::unordered_set), then we will add the new position to it.
If the potential next position is already occupied, then we continue to move in the original direction
Moving along columns, we have to skip column 0. It does not exist.
After having the new position, we create a (reduced) fraction out of it. The row of the position will be the numerator, the column will be the denominator
Then we check whether we have already seen this fraction by trying to insert it into the std::unordered_set for fractions described above (not to be confused with the container for positions)
If it could be inserted, it was not there before, so it is not a duplicate and can be put into the result vector
All the above (and a little bit more) can now be implemented into code.
This will lead to one of many potential solutions:
#include <iostream>
#include <numeric>
#include <vector>
#include <utility>
#include <unordered_set>
struct RFraction {
// We will store reduced fractions. And show numerator and denominator separately
int numerator{};
int denominator{};
// Constructor takes a numerator and a denominator, reduces the fraction and normalizes the sign
RFraction(const int n, const int d) : numerator(std::abs(n)), denominator(std::abs(d)) {
if (n == 0) denominator = 1; // All fractions with a 0 numerator get a 1 as denominator
reduce(); // Reduce fraction
if ((n < 0 and d >= 0) or (n >= 0 and d < 0)) numerator = -numerator; // Handle sign uniformly
}
// Simple reduce function using the standard approach with gcd
void reduce() { int gcd = std::gcd(numerator, denominator); numerator /= gcd; denominator /= gcd; }
// Hash Functor
struct Hash { size_t operator()(const RFraction& r) const { return std::hash<int>()(r.numerator) ^ std::hash<int>()(r.denominator); } };
// Equality, based on the fraction and not on a double
bool operator == (const RFraction& other) const { return numerator == other.numerator and denominator == other.denominator; }
// Simple output
friend std::ostream& operator << (std::ostream& os, const RFraction& f) {
if (f.denominator == 1) os << f.numerator;
else os << f.numerator << " / " << f.denominator;
return os;
}
};
using Position = std::pair<int, int>;
struct PairHash { std::size_t operator() (const Position& p) const { return std::hash<int>()(p.first) ^ std::hash<int>()(p.second); } };
using Matrix = std::unordered_set<Position, PairHash>;
using Sequence = std::vector<RFraction>;
using UniqueRFraction = std::unordered_set<RFraction, RFraction::Hash>;
enum class Direction { right, up, left, down };
Sequence getSequence(const size_t maxIndex) {
// Current position and direction
Position position{ 0,2 };
Direction direction{ Direction::right };
// Here we will store all occupied positions
Matrix matrix{ {0,1}, {0,2} };
// This is a helper to store only unique fractions in the end
UniqueRFraction uniqueRFraction{ {0,1} };
// Result. All unique fractions along the path
Sequence result{ {0,1} };
// Find all elements of the sequence
for (size_t k{}; k < maxIndex; ) {
Position potentialNextPosition(position);
// Depending on the current direction, we want to get a new position and new direction
switch (direction) {
case Direction::right:
// Check position above. Get position above current position
++potentialNextPosition.first;
// Check, if this is already occupied
if (matrix.count(potentialNextPosition) == 0) {
// If not occupied then use this new direction and the new position
direction = Direction::up;
position = potentialNextPosition;
}
// Keep on going right
else if(++position.second == 0) ++position.second;
break;
case Direction::up:
// Check position left. Get position left of current position
if(--potentialNextPosition.second == 0) --potentialNextPosition.second;
// Check, if this is already occupied
if (matrix.count(potentialNextPosition) == 0) {
// If not occupied then use this new direction and the new position
direction = Direction::left;
position = potentialNextPosition;
}
// Keep on going up
else ++position.first;
break;
case Direction::left:
// Check position below. Get position below current position
--potentialNextPosition.first;
// Check, if this is already occupied
if (matrix.count(potentialNextPosition) == 0) {
// If not occupied then use this new direction and the new position
direction = Direction::down;
position = potentialNextPosition;
}
// Keep on going left
else if (--position.second == 0) --position.second;
break;
case Direction::down:
// Check position right. Get position right of current position
if (++potentialNextPosition.second == 0) ++potentialNextPosition.second;
// Check, if this is already occupied
if (matrix.count(potentialNextPosition) == 0) {
// If not occupied then use this new direction and the new position
direction = Direction::right;
position = potentialNextPosition;
}
// Keep on going down
else --position.first;
break;
}
// Add the new position to the matrix, to indicate that it is occupied
matrix.insert(position);
// Check if the fraction created from the position is unique. If so, add it to the result
if (const auto [iter, isOK] = uniqueRFraction.insert({ position.first, position.second }); isOK) {
result.push_back(*iter);
++k;
}
}
return result;
}
int main() {
// Calculate the sequence of fractions up to a given index (index starts with 0)
Sequence seq = getSequence(25);
// Show the result
std::cout << "\nResult: \t" << seq.back() << "\n\nSequence:\n\n";
// And for debug purposes, also the complete sequence.
for (size_t k{}; k < seq.size(); ++k)
std::cout << k << '\t' << seq[k] << '\n';
return 0;
}
So I have a list of 3D points and I want to group together points that are within 1 unit or less from each other. So here's an example of what I'm talking about (I'll use 2D points for the example),
Say we have point 1: (0,0) and point 2: (0,1), which are distance 1 from each other. The program will store both in a vector. Now here's a third point (0,2). This point is within distance 1 of point 2 but not of point 1, yet the program will still store it since it is within distance 1 of at least one point in the vector.
So I want to gather 3D points in "blobs" and everything that is 1 unit or less from this "blob" will be added onto the "blob"
I've tried so many different functions these past few days. I tried recursion, but it always crashes, and I've nested tons of for loops, but I can't make this work.
Here's my code (I added in comments next to the code to make it easier to understand)
void combinePoints(vector<Point>& allPoints, vector< vector<Point> >& allPointBlobs, vector<Point>& tempBlob)
{
float check;
if(allPoints.size() != 0) //if statement to stop recursion once all the points from "allPoints" vector is checked and removed
{
for(int i = 0; i < allPoints.size(); i++) //3d distance formula checking first point with all other points
{
check = sqrt((allPoints[0].getX() - allPoints[i].getX()) * (allPoints[0].getX() - allPoints[i].getX()) +
(allPoints[0].getY() - allPoints[i].getY()) * (allPoints[0].getY() - allPoints[i].getY()) +
(allPoints[0].getZ() - allPoints[i].getZ()) * (allPoints[0].getZ() - allPoints[i].getZ()));
if ((check <= 1.000) && (check != 0 )) //once a point is found that is 1 distance or less, it is added to tempBlob vector and removed from allPoints
{
tempBlob.push_back(allPoints[0]);
tempBlob.push_back(allPoints[i]);
allPoints.erase(allPoints.begin() + i);
allPoints.erase(allPoints.begin());
break;
}
}
if(check > 1.000) //However, if no points are nearby, then tempBlob is finished finding all nearby points and is added to a vector and cleared so it can start finding another blob.
{
allPointBlobs.push_back(tempBlob);
tempBlob.clear();
cout << "Blob Done" << endl;
combinePoints(allPoints, allPointBlobs, tempBlob);
}
else
{
combinePoints2(allPoints, allPointBlobs, tempBlob);
}
}
}
void combinePoints2(vector<Point>& allPoints, vector< vector<Point> >& allPointBlobs, vector<Point>& tempBlob) //combinePoints2 is almost the same as the first one, except I changed the first part where it doesnt have to initiate a vector with first two points. This function will then check all points in the temporary blob against all other points and find ones that are 1 distance or less
{
cout << tempBlob.size() << endl; //I use this just to check if function is working
float check = 0;
if(allPoints.size() != 0)
{
for(int j = 0; j < tempBlob.size(); j++)
{
for(int k = 0; k < allPoints.size(); k++)
{
check = sqrt((tempBlob[j].getX() - allPoints[k].getX()) * (tempBlob[j].getX() - allPoints[k].getX()) +
(tempBlob[j].getY() - allPoints[k].getY()) * (tempBlob[j].getY() - allPoints[k].getY()) +
(tempBlob[j].getZ() - allPoints[k].getZ()) * (tempBlob[j].getZ() - allPoints[k].getZ()));
if ((check <= 1.000) && (check != 0 ))
{
tempBlob.push_back(allPoints[k]);
allPoints.erase(allPoints.begin() + k);
break;
}
}
if ((check <= 1.000) && (check != 0 ))
{
break;
}
}
if(check > 1.000)
{
allPointBlobs.push_back(tempBlob);
tempBlob.clear();
cout << "Blob Done" << endl;
combinePoints(allPoints, allPointBlobs, tempBlob);
}
else
{
combinePoints2(allPoints, allPointBlobs, tempBlob);
}
}
}
I use all the breaks because when a point is deleted from allPoints, it messes up the for loops, since I'm using .size() for the number of iterations. This makes the program really slow since it has to keep restarting the function whenever it finds one point. I'm hoping someone can help me find a simpler way to do this.
I've made many other functions, but they crash; this is the only one that is working so far (or at least I hope it's working lol, it just doesn't crash, which is a good sign).
Write a function that takes two points as arguments and determines whether they are within a certain distance of each other, or use a class "Point" with a method that checks whether a point passed as an argument is within distance x of this point.
Store all points within distance of 1 in a simple vector to represent your "blob".
A simple naive class could look like this:
class Point{
public:
bool withinDistance(const Point& other) const;
void addProxyPoint(const Point& other);
private:
double x_cord, y_cord, ...;
std::vector<Point> proximities;
};
Although you can do this in several other ways, especially if you don't want to store the blob on the point object itself.
Or you can write a method that checks if a point is within distance and also adds it to your blob vector.
Your choice of course.
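To make that concrete, here is a minimal sketch of the idea, assuming a 3D point with x_cord, y_cord and a z_cord member (the z_cord name and the blob-growing helper are my own illustration, not part of the class above):
#include <cmath>
#include <cstddef>
#include <vector>
class Point {
public:
    Point(double x, double y, double z) : x_cord(x), y_cord(y), z_cord(z) {}
    // True if 'other' lies within 'dist' of this point
    bool withinDistance(const Point& other, double dist = 1.0) const {
        const double dx = x_cord - other.x_cord;
        const double dy = y_cord - other.y_cord;
        const double dz = z_cord - other.z_cord;
        return std::sqrt(dx*dx + dy*dy + dz*dz) <= dist;
    }
private:
    double x_cord, y_cord, z_cord;
};
// Grow one blob: keep pulling points out of allPoints that are within
// distance 1 of anything already in the blob, until nothing more qualifies
std::vector<Point> growBlob(std::vector<Point>& allPoints) {
    std::vector<Point> blob;
    if (allPoints.empty()) return blob;
    blob.push_back(allPoints.back());
    allPoints.pop_back();
    for (std::size_t i = 0; i < blob.size(); ++i) {       // blob grows while we scan it
        for (std::size_t k = 0; k < allPoints.size(); ) {
            if (blob[i].withinDistance(allPoints[k])) {
                blob.push_back(allPoints[k]);
                allPoints.erase(allPoints.begin() + k);    // do not advance k after erasing
            } else {
                ++k;
            }
        }
    }
    return blob;
}
Calling growBlob repeatedly until allPoints is empty yields all the blobs without any recursion.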
What would be the most efficient way to compute the fewest hops it takes to get from x1, y1 to x2, y2 on an unbounded/infinite chess board? Assume that from x1, y1 we can always generate a set of legal moves.
This sounds tailor made for BFS and I have implemented one successfully. But its space and time complexity seem atrocious if x2, y2 is arbitrarily large.
I have been looking at various other algorithms like A*, Bidirectional search, iterative deepening DFS etc but so far I am clueless as to which approach would yield the most optimal (and complete) solution. Is there some insight I am missing?
If the set of legal moves is independent of the current square, then this seems ideal as an integer linear programming (ILP) problem. You'd basically solve for the number of each type of move, such that the total number of moves is minimized. For instance, for a knight constrained to only move up and to the right (so that each move is either x+=1, y+=2 or x+=2, y+=1), you'd minimize a1+a2, subject to 2*a1+a2 == x2-x1, a1+2*a2 == y2-y1, a1 >= 0, a2 >= 0. While ILPs in general are NP-complete, I'd expect a standard hill-climbing algorithm to be able to solve it quite efficiently.
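To illustrate just that restricted example (not a general ILP solver), the two constraints can even be solved in closed form; the function name below is my own:
#include <iostream>
#include <optional>
// a1 counts (x+=2, y+=1) moves, a2 counts (x+=1, y+=2) moves, so
// 2*a1 + a2 == dx and a1 + 2*a2 == dy, giving 3*a1 = 2*dx - dy and 3*a2 = 2*dy - dx.
std::optional<long long> minUpRightKnightMoves(long long dx, long long dy) {
    const long long threeA1 = 2 * dx - dy;
    const long long threeA2 = 2 * dy - dx;
    if (threeA1 < 0 || threeA2 < 0) return std::nullopt;            // would need a down/left move
    if (threeA1 % 3 != 0 || threeA2 % 3 != 0) return std::nullopt;  // not reachable with these two moves
    return threeA1 / 3 + threeA2 / 3;                               // the minimized a1 + a2
}
int main() {
    if (auto moves = minUpRightKnightMoves(200, 100))
        std::cout << "moves: " << *moves << '\n';                   // prints 100
    else
        std::cout << "unreachable with the two restricted moves\n";
}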
I don't have a complete proof yet, but I believe that if x1,y1 and x2,y2 are far away in both directions, then any optimal solution will have a lot of moves that move directly toward x2 and directly toward y2 (2 possible L-shaped moves that move in this direction). If the current position x is close to x2 but the current position y is far away from y2 for example, then alternate between the two moves that move two squares toward y2. And similarly if y is close to y2 and x and x2 are far away. Then, as soon as both the vertical and horizontal distance to x2,y2 are less than some rather small threshold (probably like 5 or 10), then you have to solve the problem with BFS or whatever to get the optimal solution, and the solution you get should be guaranteed to be optimal. I'll update my answer when I have a proof but I am almost certain this is true. If so, it means that no matter how far away x1,y1 and x2,y2 are from each other, you basically only have to solve a problem where the horizontal and vertical distances are like 5 or 10, which can be done quickly.
To expand on the discussion in comments, an uninformed search like breadth-first search (BFS) will find the optimal solution (the shortest path). However it only considers the cost-so-far g(n) for a node n and its cost increases exponentially with distance from source to target. To tame the cost of the search whilst still ensuring that the search finds the optimal solution, you need to add some information to the search algorithm via a heuristic, h(n).
Your case is a good fit for A* search, where the heuristic is a measure of distance from a node to the target (x2, y2). You could use the Euclidean distance "as the crow flies", but as you're considering a knight, the Manhattan distance might be more appropriate. Whatever measure you choose, it has to be less than (or equal to) the actual distance from the node to the target for the search to find the optimal solution (in this case the heuristic is known as "admissible"). Note that you need to divide each distance by a constant in order to get it to underestimate moves: divide by 3 for the Manhattan distance, and by sqrt(5) for the Euclidean distance (sqrt(5) is the length of the diagonal of a 2 by 1 square).
When you're running the algorithm you estimate the total distance f(n) from any node n that we've got to already as the distance so far plus the heuristic distance. I.e. f(n) = g(n) + h(n) where g(n) is the distance from (x1,y1) to node n and h(n) is the estimated heuristic distance from node n to (x2,y2). Given the nodes you've got to, you always choose the node n with the lowest f(n). I like the way you put it:
maintain a priority queue of nodes to be checked out ordered by g(n) + h(n).
If the heuristic is admissible then the algorithm finds the optimal solution because a suboptimal path can never be at the front of the priority queue: any fragment of the optimal path will always have a lower total distance (where, again, total distance is incurred distance plus heuristic distance).
The distance measure we've chosen here is monotonic (i.e. it increases as the path lengthens rather than going up or down). In this case it's possible to show that it's efficient. As usual, see Wikipedia or other sources on the web for more details. The Colorado State University page is particularly good and has nice discussions on optimality and efficiency.
Taking an example of going from (-200,-100) to (0,0), which is equivalent to your example of (0,0) to (200,100), in my implementation what we see with a Manhattan heuristic is as follows
The implementation does too much searching because with the heuristic h = Manhattan distance, taking steps of 1 across and 2 up looks just as good as the optimal steps of 2 across and 1 up, i.e. the f() values don't distinguish the two. However the algorithm still finds the optimal solution of 100 moves. It takes 2118 steps, which is still a lot better than breadth-first search, which spreads out like an ink blot (I estimate it might take 20000 to 30000 steps).
How does it do if you choose h = Euclidean distance?
This is a lot better! It only takes 104 steps, and it does so well because it incorporates our intuition that you need to head in roughly the right direction. But before we congratulate ourselves, let's try another example, from (-200,0) to (0,0). Both heuristics find an optimal path of length 100. The Euclidean heuristic takes 12171 steps to find an optimal path, as shown below.
Whereas the Manhattan heuristic takes 16077 steps
Leaving aside the fact that the Manhattan heuristic does worse again, I believe the real problem here is that there are multiple optimal paths. This isn't so strange: a re-ordering of an optimal path is still optimal. This fact is automatically taken into account by recasting the problem in a mathematical form along the lines of @Sneftel's answer.
In summary, A* with an admissible heuristic produces an optimal solution more efficiently than BFS does, but it is likely that there are more efficient solutions out there. A* is a good default algorithm in cases where you can easily come up with a distance heuristic, and although in this case it isn't going to be the best solution, it's possible to learn a lot about the problem by implementing it.
Code below in C++ as you requested.
#include <memory>
using std::shared_ptr;
#include <vector>
using std::vector;
#include <queue>
using std::priority_queue;
#include <map>
using std::map;
using std::pair;
#include <math.h>
#include <iostream>
using std::cout;
#include <fstream>
using std::ofstream;
struct Point
{
short x;
short y;
Point(short _x, short _y) { x = _x; y = _y; }
bool IsOrigin() { return x == 0 && y == 0; }
bool operator<(const Point& p) const {
return pair<short, short>(x, y) < pair<short, short>(p.x, p.y);
}
};
class Path
{
Point m_end;
shared_ptr<Path> m_prev;
int m_length; // cached
public:
Path(const Point& start)
: m_end(start)
{ m_length = 0; }
Path(const Point& start, shared_ptr<Path> prev)
: m_end(start)
, m_prev(prev)
{ m_length = m_prev->m_length +1; }
Point GetEnd() const { return m_end; }
int GetLength() const { return m_length; }
vector<Point> GetPoints() const
{
vector<Point> points;
for (const Path* curr = this; curr; curr = curr->m_prev.get()) {
points.push_back(curr->m_end);
}
return points;
}
double g() const { return m_length; }
//double h() const { return (abs(m_end.x) + abs(m_end.y)) / 3.0; } // Manhattan
double h() const { return sqrt((m_end.x*m_end.x + m_end.y*m_end.y)/5.0); } // Euclidean (divide by 5.0, not 5, to avoid integer division)
double f() const { return g() + h(); }
};
bool operator<(const shared_ptr<Path>& p1, const shared_ptr<Path>& p2)
{
return p1->f() > p2->f(); // priority_queue puts the largest element on top, so invert the comparison to pop the smallest f()
}
int main()
{
const Point source(-200, 0);
const Point target(0, 0);
priority_queue<shared_ptr<Path>> q;
q.push(shared_ptr<Path>(new Path(source)));
map<Point, short> endPath2PathLength;
endPath2PathLength.insert(map<Point, short>::value_type(source, 0));
int pointsExpanded = 0;
shared_ptr<Path> path;
while (!(path = q.top())->GetEnd().IsOrigin())
{
q.pop();
const short newLength = path->GetLength() + 1;
for (short dx = -2; dx <= 2; ++dx){
for (short dy = -2; dy <= 2; ++dy){
if (abs(dx) + abs(dy) == 3){
const Point newEnd(path->GetEnd().x + dx, path->GetEnd().y + dy);
auto existingEndPath = endPath2PathLength.find(newEnd);
if (existingEndPath == endPath2PathLength.end() ||
existingEndPath->second > newLength) {
q.push(shared_ptr<Path>(new Path(newEnd, path)));
endPath2PathLength[newEnd] = newLength;
}
}
}
}
pointsExpanded++;
}
cout<< "Path length " << path->GetLength()
<< " (points expanded = " << pointsExpanded << ")\n";
ofstream fout("Points.csv");
for (auto i : endPath2PathLength) {
fout << i.first.x << "," << i.first.y << "," << i.second << "\n";
}
vector<Point> points = path->GetPoints();
ofstream fout2("OptimalPoints.csv");
for (auto i : points) {
fout2 << i.x << "," << i.y << "\n";
}
return 0;
}
Note this isn't very well tested so there may well be bugs but I hope the general idea is clear.
I'm developing a structure that is like a binary tree but generalized across dimensions so you can set whether it is a binary tree, quadtree, octree, etc by setting the dimension parameter during initialization.
Here is the definition of it:
template <uint Dimension, typename StateType>
class NDTree {
public:
std::array<NDTree*, cexp::pow(2, Dimension)> * nodes;
NDTree * parent;
StateType state;
char position; //position in parents node list
bool leaf;
NDTree const &operator[](const int i) const
{
return (*(*nodes)[i]);
}
NDTree &operator[](const int i)
{
return (*(*nodes)[i]);
}
};
So, to initialize it- I set a dimension and then subdivide. I am going for a quadtree of depth 2 for illustration here:
const uint Dimension = 2;
NDTree<Dimension, char> tree;
tree.subdivide();
for(int i=0; i<tree.size(); i++)
tree[i].subdivide();
for(int y=0; y<cexp::pow(2, Dimension); y++) {
for(int x=0; x<cexp::pow(2, Dimension); x++) {
tree[y][x].state = ((y)*10)+(x);
}
}
std::cout << tree << std::endl;
This will result in a quadtree, with the state of each of the values initialized to [0-4][0-4].
([{0}{1}{2}{3}][{10}{11}{12}{13}][{20}{21}{22}{23}][{30}{31}{32}{33}])
I am having trouble finding adjacent nodes from any piece. What it needs to do is take a direction and then (if necessary) traverse up the tree if the direction goes off the edge of the node's parent (e.g. if we were on the bottom right of the quadtree square and we needed to get the piece to the right of it). My algorithm returns bogus values.
Here is how the arrays are laid out:
And here are the structures necessary to know for it:
This just holds the direction for items.
enum orientation : signed int {LEFT = -1, CENTER = 0, RIGHT = 1};
This holds a direction and whether or not to go deeper.
template <uint Dimension>
struct TraversalHelper {
std::array<orientation, Dimension> way;
bool deeper;
};
node_orientation_table holds the orientations in the structure. So in 2d, 0 0 refers to the top left square (or left left square).
[[LEFT, LEFT], [RIGHT, LEFT], [LEFT, RIGHT], [RIGHT, RIGHT]]
And the function getPositionFromOrientation would take LEFT, LEFT and return 0. It is just basically the opposite of the node_orientation_table above.
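getPositionFromOrientation is not shown in the question; as a point of reference, a minimal sketch consistent with the table layout above (and with the descent loop further down) could look like this:
template <uint Dimension>
int getPositionFromOrientation(const std::array<orientation, Dimension>& way) {
    // Dimension d maps to bit d: LEFT contributes 0, RIGHT contributes 1 << d,
    // so {LEFT, LEFT} -> 0, {RIGHT, LEFT} -> 1, {LEFT, RIGHT} -> 2, {RIGHT, RIGHT} -> 3
    int pos = 0;
    for (uint d = 0; d < Dimension; ++d)
        if (way[d] == RIGHT)
            pos += (1 << d);
    return pos;
}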
TraversalHelper<Dimension> traverse(const std::array<orientation, Dimension> dir, const std::array<orientation, Dimension> cmp) const
{
TraversalHelper<Dimension> depth;
for(uint d=0; d < Dimension; ++d) {
switch(dir[d]) {
case CENTER:
depth.way[d] = CENTER;
goto cont;
case LEFT:
if(cmp[d] == RIGHT) {
depth.way[d] = LEFT;
} else {
depth.way[d] = RIGHT;
depth.deeper = true;
}
break;
case RIGHT:
if(cmp[d] == LEFT) {
depth.way[d] = RIGHT;
} else {
depth.way[d] = LEFT;
depth.deeper = true;
}
break;
}
cont:
continue;
}
return depth;
}
std::array<orientation, Dimension> uncenter(const std::array<orientation, Dimension> dir, const std::array<orientation, Dimension> cmp) const
{
std::array<orientation, Dimension> way;
for(uint d=0; d < Dimension; ++d)
way[d] = (dir[d] == CENTER) ? cmp[d] : dir[d];
return way;
}
NDTree * getAdjacentNode(const std::array<orientation, Dimension> direction) const
{
//our first traversal pass
TraversalHelper<Dimension> pass = traverse(direction, node_orientation_table[position]);
//if we are lucky the direction results in one of our siblings
if(!pass.deeper)
return (*(*parent).nodes)[getPositionFromOrientation<Dimension>(pass.way)];
std::vector<std::array<orientation, Dimension>> up; //holds our directions for going up the tree
std::vector<std::array<orientation, Dimension>> down; //holds our directions for going down
NDTree<Dimension, StateType> * tp = parent; //tp is our tree pointer
up.push_back(pass.way); //initialize with our first pass we did above
while(true) {
//continue going up as long as it takes, baby
pass = traverse(up.back(), node_orientation_table[tp->position]);
std::cout << pass.way << " :: " << uncenter(pass.way, node_orientation_table[tp->position]) << std::endl;
if(!pass.deeper) //we've reached necessary top
break;
up.push_back(pass.way);
//if we don't have any parent we must explode upwards
if(tp->parent == nullptr)
tp->reverseBirth(tp->position);
tp = tp->parent;
}
//line break ups and downs
std::cout << std::endl;
//traverse upwards combining the matrices to get our actual position in cube
tp = const_cast<NDTree *>(this);
for(int i=1; i<up.size(); i++) {
std::cout << up[i] << " :: " << uncenter(up[i], node_orientation_table[tp->position]) << std::endl;
down.push_back(uncenter(up[i], node_orientation_table[tp->parent->position]));
tp = tp->parent;
}
//make our way back down (tp is still set to upmost parent from above)
for(const auto & i : down) {
int pos = 0; //we need to get the position from an orientation list
for(int d=0; d<i.size(); d++)
if(i[d] == RIGHT)
pos += cexp::pow(2, d); //consider left as 0 and right as 1 << dimension
//grab the child of treepointer via position we just calculated
tp = (*(*tp).nodes)[pos];
}
return tp;
}
For an example of this:
std::array<orientation, Dimension> direction;
direction[0] = LEFT; //x
direction[1] = CENTER; //y
NDTree<Dimension, char> * result = tree[3][0].getAdjacentNode(direction);
This should grab the top right square within the bottom left square, e.g. tree[2][1], which would have a value of 21 if we read its state. This works since my last edit (the algorithm has been modified). However, many queries still do not return correct results.
//Should return tree[3][1], instead it gives back tree[2][3]
NDTree<Dimension, char> * result = tree[1][2].getAdjacentNode({ RIGHT, RIGHT });
//Should return tree[1][3], instead it gives back tree[0][3]
NDTree<Dimension, char> * result = tree[3][0].getAdjacentNode({ RIGHT, LEFT });
There are more examples of incorrect behavior such as tree[0][0](LEFT, LEFT), but many others work correctly.
Here is the folder of the git repo I am working from. Just run g++ -std=c++11 main.cpp from that directory if necessary.
Here is one property you can try to exploit:
consider just the 4 nodes:
00 01
10 11
Any node can have up to 4 neighbor nodes; two will exist in the same structure (larger square) and you have to look for the other two in neighboring structures.
Let's focus on identifying the neighbors which are in the same structure: the neighbors of 00 are 01 and 10; the neighbors of 11 are 01 and 10. Notice that only one bit differs between neighbor nodes and that neighbors can be classified as horizontal and vertical. So:
00 - 01      00 - 01    // horizontal neighbors
 |                 |
10                11    // vertical neighbors
Notice how flipping the MSB gets the vertical neighbor and flipping the LSB gets the horizontal neighbor? Let's have a closer look:
MSB: 0 -> 1 gets the node directly below
1 -> 0 gets the node directly above
LSB: 0 -> 1 gets the node to the right
1 -> 0 gets the node to the left
So now we can determine the nodes in each direction, assuming they exist in the same substructure. What about the node to the left of 00, or above 10? According to the logic so far, if you want a horizontal neighbor you should flip the LSB; but flipping it would retrieve 01 (the node to the right). So let's add a new rule for impossible operations:
you can't go left for x0 ,
you can't go right for x1,
you can't go up for 0x,
you can't go down for 1x
*impossible operations refers to operations within the same structure.
Let's look at the bigger picture: which are the up and left neighbors of 00? If we go left from 00 of structure 0 (S0), we should end up at 01 of S1, and if we go up we end up at node 10 of S2. Notice that they are basically the same horizontal/vertical neighbor values as in S0, only they are in different structures. So basically, if we figure out how to jump from one structure to another, we have an algorithm.
Let's go back to our example: going up from node 00 (S0). We should end up in S2; so again 00 -> 10, flipping the MSB. So if we apply the same algorithm we use within the structure, we should be fine.
Bottom line:
valid transitions within a structure:
MSB 0, go down
1, go up
LSB 0, go right
1, go left
for invalid transitions (like MSB 0, go up):
determine the neighbor structure by flipping the MSB for vertical moves and the LSB for horizontal moves,
and get the neighbor you are looking for by transforming an illegal move in structure A
into a legal one in structure B -> S0: MSB 0 up becomes S2: MSB 0 down.
I hope this idea is explicit enough; a small sketch follows.
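As a small sketch of the rules above for the 2D case (the Dir enum and names here are my own, purely for illustration): a node inside a structure is labelled with two bits, MSB = row and LSB = column; flipping the relevant bit always gives the neighbor's label, and the old bit value tells whether that neighbor lives in the same structure or in the adjacent one.
enum class Dir { Up, Down, Left, Right };
struct Neighbor {
    int node;        // two-bit label of the neighboring node
    bool sameStruct; // true if it is in the same structure, false if we had to jump
};
Neighbor neighborOf(int node, Dir d) {
    const int msb = (node >> 1) & 1; // row bit
    const int lsb = node & 1;        // column bit
    switch (d) {
        case Dir::Down:  return { node ^ 2, msb == 0 }; // flip MSB to move vertically
        case Dir::Up:    return { node ^ 2, msb == 1 };
        case Dir::Right: return { node ^ 1, lsb == 0 }; // flip LSB to move horizontally
        case Dir::Left:  return { node ^ 1, lsb == 1 };
    }
    return { node, true }; // not reached
}
For example, neighborOf(0b00, Dir::Up) returns node 0b10 with sameStruct == false, i.e. node 10 of the structure above, exactly as described.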
Check out this answer for neighbor search in octrees: https://stackoverflow.com/a/21513573/3146587. Basically, you need to record in the nodes the traversal from the root to the node and manipulate this information to generate the required traversal to reach the adjacent nodes.
The simplest answer I can think of is to get back your node from the root of your tree.
Each cell can be assigned a coordinate mapping to the deepest nodes of your tree. In your example, the (x,y) coordinates would range from 0 to 2^dimension - 1, i.e. 0 to 3.
First, compute the coordinate of the neighbour with whatever algorithm you like (for instance, decide if a right move off the edge should wrap to the 1st cell of the same row, go down to the next row or stay in place).
Then, feed the new coordinates to your regular search function. It will return the neighbour cell in dimension steps.
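For illustration, the "regular search function" could be a simple descent from the root, picking one child bit per level (the child() accessor and the bit layout here are assumptions for the sketch, not taken from the question's NDTree):
template <typename Node>
Node* findCell(Node* root, unsigned x, unsigned y, unsigned depth) {
    Node* n = root;
    for (int level = static_cast<int>(depth) - 1; level >= 0; --level) {
        const unsigned xi = (x >> level) & 1;  // 0 = left half,  1 = right half
        const unsigned yi = (y >> level) & 1;  // 0 = lower half, 1 = upper half
        n = n->child(xi + 2 * yi);             // assumed accessor for the 4 children
    }
    return n;                                  // leaf holding cell (x, y)
}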
You can optimize that by looking at the binary value of the coordinates. Basically, the rank of the most significant bit of difference tells you how many levels up you should go.
For instance, let's take a quadtree of depth 4. Coordinates range from 0 to 15.
Assume we go left from cell number 5 (0101b). The new coordinate is 4 (0100b). The most significant bit changed is bit 0, which means you can find the neighbour in the current block.
Now if you go right, the new coordinate is 6 (0110b), so the change is affecting bit 1, which means you have to go up one level to access your cell.
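A quick sketch of that bit trick (illustrative names, one axis only): the highest bit that differs between the old and the new coordinate gives the number of levels to climb before descending again.
#include <cstdint>
int levelsUp(std::uint32_t oldCoord, std::uint32_t newCoord) {
    std::uint32_t diff = oldCoord ^ newCoord;  // differing bits
    int msb = -1;
    while (diff) { diff >>= 1; ++msb; }        // index of the most significant set bit
    return msb;                                // 0 -> same block, 1 -> one level up, ...
}
With the examples above: levelsUp(5, 4) returns 0 (stay in the current block) and levelsUp(5, 6) returns 1 (go up one level).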
All this being said, the computation time and volume of code needed to use such tricks seems hardly worth the effort to me.
I've been trying to do this shortest path problem and I realised that the way I was trying to do it was almost completely wrong, and I have no idea how to complete it.
The question requires you to find the shortest path from one point to another given a text file of input.
The input looks like this with the first value representing how many levels there are.
4
14 10 15
13 5 22
13 7 11
5
This would result in an answer of: 14+5+13+11+5=48
The question asks for the shortest path from the bottom left to the top right.
The way I have attempted to do this is to compare the values of each possible path and then add them to a sum, e.g. the first step from the input I provided would compare 14 against 10 + 15. I ran into the problem that if both values are the same, it will stuff up the rest of the working.
I hope this makes some sense.
Any suggestions on an algorithm to use or any sample code would be greatly appreciated.
Assume your data file is read into a 2D array of the form:
int weights[HEIGHT][3] = {
{14, 10, 15},
{13, 5, 22},
{13, 7, 11},
{X, 5, X}
};
where X can be anything, doesn't matter. For this I'm assuming positive weights and therefore there is never a need to consider a path that goes "down" a level.
In general you can say that the minimum cost is lesser of the following 2 costs:
1) The cost of rising a level: The cost of the path to the opposite side from 1 level below, plus the cost of coming up.
2) The cost of moving across a level : The cost of the path to the opposite from the same level, plus the cost of coming across.
int MinimumCost(int weight[HEIGHT][3]) {
int MinCosts[2][HEIGHT]; // MinCosts[0][Level] stores the minimum cost of reaching
// the left node of that level
// MinCosts[1][Level] stores the minimum cost of reaching
// the right node of that level
MinCosts[0][0] = 0; // cost nothing to get to the start
MinCosts[1][0] = weight[0][1]; // the cost of moving across the bottom
for (int level = 1; level < HEIGHT; level++) {
// cost of coming to left from below right
int LeftCostOneStep = MinCosts[1][level - 1] + weight[level - 1][2];
// cost of coming to left from below left then across
int LeftCostTwoStep = MinCosts[0][level - 1] + weight[level - 1][0] + weight[level][1];
MinCosts[0][level] = std::min(LeftCostOneStep, LeftCostTwoStep);
// cost of coming to right from below left
int RightCostOneStep = MinCosts[0][level - 1] + weight[level - 1][0];
// cost of coming to right from below right then across
int RightCostTwoStep = MinCosts[1][level - 1] + weight[level - 1][2] + weight[level][1];
MinCosts[1][level] = std::min(RightCostOneStep, RightCostTwoStep);
}
return MinCosts[1][HEIGHT - 1];
}
I haven't double-checked the syntax; please use it only to get a general idea of how to solve the problem. You could also rewrite the algorithm so that MinCosts uses constant memory, MinCosts[2][2], and your whole algorithm could become a state machine.
You could also use Dijkstra's algorithm to solve this, but that's a bit like killing a fly with a nuclear warhead.
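For completeness, a sketch of the constant-memory variant mentioned above (same weight layout as before, std::min from <algorithm>; untested, same caveat as the code above):
int MinimumCostRolling(int weight[HEIGHT][3]) {
    int left = 0;              // best cost to reach the left node of the current level
    int right = weight[0][1];  // best cost to reach the right node (cross the bottom)
    for (int level = 1; level < HEIGHT; level++) {
        int newLeft  = std::min(right + weight[level - 1][2],
                                left + weight[level - 1][0] + weight[level][1]);
        int newRight = std::min(left + weight[level - 1][0],
                                right + weight[level - 1][2] + weight[level][1]);
        left = newLeft;
        right = newRight;
    }
    return right;              // top-right corner
}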
My first idea was to represent the graph with a matrix and then run a DFS or Dijkstra to solve it. But for this given question, we can do better.
So, here is a possible solution of this problem that runs in O(n). 2*i means left node of level i and 2*i+1 means right node of level i. Read the comments in this solution for an explanation.
#include <stdio.h>
struct node {
int lup; // Cost to go to level up
int stay; // Cost to stay at this level
int dist; // Dist to top right node
};
int main() {
int N;
scanf("%d", &N);
struct node tab[2*N];
// Read input.
int i;
for (i = 0; i < N-1; i++) {
int v1, v2, v3;
scanf("%d %d %d", &v1, &v2, &v3);
tab[2*i].lup = v1;
tab[2*i].stay = tab[2*i+1].stay = v2;
tab[2*i+1].lup = v3;
}
int v;
scanf("%d", &v);
tab[2*i].stay = tab[2*i+1].stay = v;
// Now the solution:
// The last level is obvious:
tab[2*i+1].dist = 0;
tab[2*i].dist = v;
// Now, for each level, we compute the cost.
for (i = N - 2; i >= 0; i--) {
tab[2*i].dist = tab[2*i+3].dist + tab[2*i].lup;
tab[2*i+1].dist = tab[2*i+2].dist + tab[2*i+1].lup;
// Can we do better by staying at the same level ?
if (tab[2*i].dist > tab[2*i+1].dist + tab[2*i].stay) {
tab[2*i].dist = tab[2*i+1].dist + tab[2*i].stay;
}
if (tab[2*i+1].dist > tab[2*i].dist + tab[2*i+1].stay) {
tab[2*i+1].dist = tab[2*i].dist + tab[2*i+1].stay;
}
}
// Print result
printf("%d\n", tab[0].dist);
return 0;
}
(This code has been tested on the given example.)
Use a depth-first search and add only the minimum values. Then check which side is the shortest stair. If it's a graph problem look into a directed graph. For each stair you need 2 vertices. The cost from ladder to ladder can be something else.
The idea of a simple version of the algorithm is the following:
define a list of vertices (places where you can stay) and edges (walks you can do)
every vertex will have a list of edges connecting it to other vertices
for every edge store the walk length
for every vertex store a field initialized to 1000000000, meaning "how long is the walk to here"
create a list of "active" vertices initialized with just the starting point
set the walk-distance field of starting vertex with 0 (you're here)
Now the search algorithm proceeds as
pick a vertex from the "active list" with the lowest walk_distance and remove it from the list
if the vertex is the destination you're done.
otherwise for each edge in that vertex compute the walk distance to the other_vertex as
new_dist = vertex.walk_distance + edge.length
check if the new distance is shorter than other_vertex.walk_distance and in this case update other_vertex.walk_distance to the new value and put that vertex in the "active list" if it's not already there.
repeat from 1
If you run out of nodes in the active list and never processed the destination vertex it means that there was no way to reach the destination vertex from the starting vertex.
For the data structure in C++ I'd use something like
struct Vertex {
double walk_distance;
std::vector<struct Edge *> edges;
...
};
struct Edge {
double length;
Vertex *a, *b;
...
void connect(Vertex *va, Vertex *vb) {
a = va; b = vb;
va->edges.push_back(this); vb->edges.push_back(this);
}
...
};
Then from the input I'd know that for n levels there are 2*n vertices needed (left and right side of each floor) and 2*(n-1) + n edges needed (one per each stair and one for each floor walk).
For each floor except the last you need to build three edges, for last floor only one.
I'd also allocate all edges and vertices in vectors first, fixing the pointers later (post-construction setup is an anti-pattern but here is to avoid problems with reallocations and still maintaining things very simple).
int n = number_of_levels;
std::vector<Vertex> vertices(2*n);
std::vector<Edge> edges(2*(n-1) + n);
for (int i=0; i<n-1; i++) {
Vertex& left = vertices[i*2];
Vertex& right = vertices[i*2 + 1];
Vertex& next_left = vertices[(i+1)*2];
Vertex& next_right = vertices[(i+1)*2 + 1];
Edge& dl_ur = edges[i*3]; // down-left to up-right stair
Edge& dr_ul = edges[i*3+1]; // down-right to up-left stair
Edge& floor = edges[i*3+2];
dl_ur.connect(&left, &next_right);
dr_ul.connect(&right, &next_left);
floor.connect(&left, &right);
}
// Last floor
edges.back().connect(&vertices[2*n-2], &vertices[2*n-1]);
NOTE: untested code
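The search loop itself, following the numbered steps above, could look roughly like this (a sketch, assuming the Vertex/Edge structs just shown, walk_distance pre-initialized to 1000000000, and <vector>/<algorithm> included; the "active list" is a plain vector scanned for the minimum):
Vertex* shortestPath(Vertex* start, Vertex* destination) {
    start->walk_distance = 0.0;
    std::vector<Vertex*> active{ start };
    while (!active.empty()) {
        // 1. pick the active vertex with the lowest walk_distance and remove it
        auto best = std::min_element(active.begin(), active.end(),
            [](const Vertex* a, const Vertex* b) { return a->walk_distance < b->walk_distance; });
        Vertex* v = *best;
        active.erase(best);
        // 2. done if it is the destination
        if (v == destination) return v;
        // 3. relax every edge leaving this vertex
        for (Edge* e : v->edges) {
            Vertex* other = (e->a == v) ? e->b : e->a;
            const double new_dist = v->walk_distance + e->length;
            if (new_dist < other->walk_distance) {
                other->walk_distance = new_dist;
                if (std::find(active.begin(), active.end(), other) == active.end())
                    active.push_back(other);
            }
        }
    }
    return nullptr; // the destination is unreachable from the start
}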
EDIT
Of course this algorithm can solve a much more general problem where the set of vertices and edges is arbitrary (but lengths are non-negative).
For the very specific problem a much simpler algorithm is possible, that doesn't even need any data structure and that can instead compute the result on the fly while reading the input.
#include <iostream>
#include <algorithm>
int main(int argc, const char *argv[]) {
int n; std::cin >> n;
int l=0, r=1000000000;
while (--n > 0) {
int a, b, c; std::cin >> a >> b >> c;
int L = std::min(r+c, l+b+c);
int R = std::min(r+b+a, l+a);
l=L; r=R;
}
int b; std::cin >> b;
std::cout << std::min(r, l+b) << std::endl;
return 0;
}
The idea of this solution is quite simple:
l variable is the walk_distance for the left side of the floor
r variable is the walk_distance for the right side
Algorithm:
we initialize l=0 and r=1000000000 as we're on the left side
for all intermediate steps we read the three distances:
a is the length of the down-left to up-right stair
b is the length of the floor
c is the length of the down-right to up-left stair
we compute the walk_distance for left and right side of next floor
L is the minimum between r+c and l+b+c (either we go up starting from right side, or we go there first starting from left side)
R is the minimum between l+a and r+b+a (either we go up starting from the left, or we start from the right and cross the floor first)
for the last step we just need to choose the minimum between r and coming there from l by crossing the last floor
I have a planar graph embedded in a plane (a plane graph) and want to search its faces.
The graph is not connected but consists of several connected graphs, which are not separately addressable (e.g. a subgraph can be contained in a face of another graph).
I want to find the polygons (faces) which include a certain 2d point.
The polygons are formed by the faces of the graphs. As the number of faces is quite big I would like to avoid to determine them beforehand.
What is the general complexity of such a search, and what C++ library or coding approach can I use to accomplish it?
Updated to clarify: I am referring to a graph in the xy plane here.
You pose an interesting challenge. A relatively simple solution is possible if the polygon happens always to be convex, because in that case one need only ask whether the point of interest lies on the same flank (whether left or right) of all the polygon's sides. Though I know of no especially simple solution for the general case, the following code does seem to work for any, arbitrary polygon, as inspired indirectly by Cauchy's famous integral formula.
One need not be familiar with Cauchy to follow the code, for comments within the code explain the technique.
#include <vector>
#include <cstddef>
#include <cstdlib>
#include <cmath>
#include <iostream>
// This program takes its data from the standard input
// stream like this:
//
// 1.2 0.5
// -0.1 -0.2
// 2.7 -0.3
// 2.5 2.9
// 0.1 2.8
//
// Given such input, the program answers whether the
// point (1.2, 0.5) does not lie within the polygon
// whose vertices, in sequence, are (-0.1, -0.2),
// (2.7, -0.3), (2.5, 2.9) and (0.1, 2.8). Naturally,
// the program wants at least three vertices, so it
// requires the input of at least eight numbers (where
// the example has four vertices and thus ten numbers).
//
// This code lacks really robust error handling, which
// could however be added without too much trouble.
// Also, its function angle_swept() could be shortened
// at cost to readability; but this is not done here,
// since the function is already hard enough to grasp as
// it stands.
//
//
const double TWOPI = 8.0 * atan2(1.0, 1.0); // two times pi, or 360 deg
namespace {
struct Point {
double x;
double y;
Point(const double x0 = 0.0, const double y0 = 0.0)
: x(x0), y(y0) {}
};
// As it happens, for the present code's purpose,
// a Point and a Vector want exactly the same
// members and operations; thus, make the one a
// synonym for the other.
typedef Point Vector;
std::istream &operator>>(std::istream &ist, Point &point) {
double x1, y1;
if(ist >> x1 >> y1) point = Point(x1, y1);
return ist;
}
// Calculate the vector from one point to another.
Vector operator-(const Point &point2, const Point &point1) {
return Vector(point2.x - point1.x, point2.y - point1.y);
}
// Calculate the dot product of two Vectors.
// Overload the "*" operator for this purpose.
double operator*(const Vector &vector1, const Vector &vector2) {
return vector1.x*vector2.x + vector1.y*vector2.y;
}
// Calculate the (two-dimensional) cross product of two Vectors.
// Overload the "%" operator for this purpose.
double operator%(const Vector &vector1, const Vector &vector2) {
return vector1.x*vector2.y - vector1.y*vector2.x;
}
// Calculate a Vector's magnitude or length.
double abs(const Vector &vector) {
return std::sqrt(vector.x*vector.x + vector.y*vector.y);
}
// Normalize a vector to unit length.
Vector unit(const Vector &vector) {
const double abs1 = abs(vector);
return Vector(vector.x/abs1, vector.y/abs1);
}
// Imagine standing in the plane at the point of
// interest, facing toward a vertex. Then imagine
// turning to face the next vertex without leaving
// the point. Answer this question: through what
// angle did you just turn, measured in radians?
double angle_swept(
const Point &point, const Point &vertex1, const Point &vertex2
) {
const Vector unit1 = unit(vertex1 - point);
const Vector unit2 = unit(vertex2 - point);
const double dot_product = unit1 * unit2;
const double cross_product = unit1 % unit2;
// (Here we must be careful. Either the dot
// product or the cross product could in theory
// be used to extract the angle but, in
// practice, either the one or the other may be
// numerically problematical. Use whichever
// delivers the better accuracy.)
return (fabs(dot_product) <= fabs(cross_product)) ? (
(cross_product >= 0.0) ? (
// The angle lies between 45 and 135 degrees.
acos(dot_product)
) : (
// The angle lies between -45 and -135 degrees.
-acos(dot_product)
)
) : (
(dot_product >= 0.0) ? (
// The angle lies between -45 and 45 degrees.
asin(cross_product)
) : (
// The angle lies between 135 and 180 degrees
// or between -135 and -180 degrees.
((cross_product >= 0.0) ? TWOPI/2.0 : -TWOPI/2.0)
- asin(cross_product)
)
);
}
}
int main(const int, char **const argv) {
// Read the x and y coordinates of the point of
// interest, followed by the x and y coordinates of
// each vertex in sequence, from std. input.
// Observe that whether the sequence of vertices
// runs clockwise or counterclockwise does
// not matter.
Point point;
std::vector<Point> vertex;
std::cin >> point;
{
Point point1;
while (std::cin >> point1) vertex.push_back(point1);
}
if (vertex.size() < 3) {
std::cerr << argv[0]
<< ": a polygon wants at least three vertices\n";
std::exit(1);
}
// Standing as it were at the point of interest,
// turn to face each vertex in sequence. Keep
// track of the total angle through which you
// have turned.
double cumulative_angle_swept = 0.0;
for (size_t i = 0; i < vertex.size(); ++i) {
// In an N-sided polygon, vertex N is again
// vertex 0. Since j==N is out of range,
// if i==N-1, then let j=0. Otherwise,
// let j=i+1.
const size_t j = (i+1) % vertex.size();
cumulative_angle_swept +=
angle_swept(point, vertex[i], vertex[j]);
}
// Judge the point of interest to lie within the
// polygon if you have turned a complete circuit.
const bool does_the_point_lie_within_the_polygon =
fabs(cumulative_angle_swept) >= TWOPI/2.0;
// Output.
std::cout
<< "The angle swept by the polygon's vertices about the point\n"
<< "of interest is " << cumulative_angle_swept << " radians ("
<< ((360.0/TWOPI)*cumulative_angle_swept) << " degrees).\n"
<< "Therefore, the point lies "
<< (
does_the_point_lie_within_the_polygon
? "within" : "outside of"
)
<< " the polygon.\n";
return !does_the_point_lie_within_the_polygon;
}
Of course, the above code is just something I wrote because your challenge was interesting and I wanted to see if I could meet it. If your application is important, then you should both test and review the code, and please report back here any bugs you find. I have tested the code against two or three cases and it seems to work, but for important duty it would want more exhaustive testing.
Good luck with your application.