I've been trying to implement a three-dimensional BSP tree to render single transparent objects (a cube, a box with a cylinder in it, etc.). As far as I understand, this should work, but it doesn't, and I can't figure out why. Everything I've read refers to BSP trees being used either in two dimensions or on multiple objects, so I was wondering whether I've generally misunderstood what BSP trees can be applied to, rather than having an error in my code. I've looked at a lot of material online, and my code seems to match Bretton Wade's (ftp://ftp.sgi.com/other/bspfaq/faq/bspfaq.html), so if anybody has samples of BSP code for single objects/transparency in particular, that would be wonderful.
Thank you.
BSP trees can be generalized to any N-dimensional space, since by definition a hyperplane bisects an N-dimensional space into two parts. In other words, a given point in N-dimensional space must either lie on the hyperplane or in one of the two half-spaces that the hyperplane creates.
For 2D, a BSP tree is created by drawing a line and then testing which side of that line a point lies on; this works because a line bisects a 2D space. For 3D, you need a plane, which is typically formed from the normal to the polygonal surface that you're using as the test.
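As an illustration, here is a minimal sketch of that 3D side test, assuming the plane is stored as a normal n together with a point p0 on the plane (the vec3 type and all names here are just for this sketch):

struct vec3 { double x, y, z; }; //illustrative minimal vector type

double dot(const vec3& a, const vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

//> 0: p is in front of the plane; < 0: behind; ~0: on the plane
double side_of_plane(const vec3& p, const vec3& n, const vec3& p0)
{
    return dot(n, vec3{p.x - p0.x, p.y - p0.y, p.z - p0.z});
}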
So your algorithm would be something like the following:
Create a queue containing all the polys from the cube. It would be best to randomly add the polys to the queue rather than in some order.
Pop a poly from the front of the queue ... make this the "root" of the BSP tree.
Create a normal from that poly
Pop another poly from the queue
Test whether all the points in that poly are in front of or behind the plane defined by the root poly's normal.
If they are all in-front, then make that poly the right-child of the root. If they are all behind, make that poly the left-child of the root.
If the points in the poly are neither all in front of nor all behind the plane defined by the root polygon's normal, you'll need to split the poly into two parts for the portions in front of and behind the plane. Add the two new polys created by this split to the back of the queue, and repeat from step #4.
Pop another poly from the queue.
Test this poly against the root. Since the root has a child, once you've tested whether the poly is in front of or behind the root (keeping in mind step #7, which may require a split), test the poly against the right child if it's in front, or the left child if it's behind. If there is no such child node, stop moving through the tree and add the polygon to the tree as that child.
For any node you run into where the current poly is neither fully in front nor fully behind, you'll need to perform the split from step #7 and then go back to step #4.
Keep repeating this process until the queue is empty.
Code for this algorithm would conceptually look something like:
#include <queue>
#include <vector>

//poly_t, test(), split_current_poly(), and the constants COINCIDENT,
//IN_FRONT, BEHIND, SPLIT are assumed to be defined elsewhere
struct bsp_node
{
    std::vector<poly_t> polys;
    bsp_node* rchild;
    bsp_node* lchild;

    bsp_node(const poly_t& input) : rchild(NULL), lchild(NULL)
    {
        polys.push_back(input);
    }
};

std::queue<poly_t> poly_queue;
//...add all the polygons in the scene randomly to the queue

bsp_node* bsp_root = new bsp_node(poly_queue.front());
poly_queue.pop();

while (!poly_queue.empty())
{
    //grab a poly from the queue
    poly_t current_poly = poly_queue.front();
    poly_queue.pop();

    //search the binary tree
    bsp_node* current_node = bsp_root;
    bsp_node* prev_node = NULL;
    bool stop_search = false;
    int last_side = IN_FRONT; //remember which branch we walked toward last

    while (current_node != NULL && !stop_search)
    {
        //use a plane defined by the current_node to test current_poly
        int result = test(current_poly, current_node);
        switch (result)
        {
            case COINCIDENT:
                stop_search = true;
                current_node->polys.push_back(current_poly);
                break;

            case IN_FRONT:
                prev_node = current_node;
                current_node = current_node->rchild;
                last_side = IN_FRONT;
                break;

            case BEHIND:
                prev_node = current_node;
                current_node = current_node->lchild;
                last_side = BEHIND;
                break;

            //split the poly and add the newly created polygons back to the queue
            case SPLIT:
                stop_search = true;
                split_current_poly(current_poly, poly_queue);
                break;
        }
    }

    //if we reached a NULL child, that means we can add the poly to the tree;
    //attach it on the side we were walking toward, not just whichever child
    //happens to be empty
    if (!current_node)
    {
        if (last_side == IN_FRONT)
            prev_node->rchild = new bsp_node(current_poly);
        else
            prev_node->lchild = new bsp_node(current_poly);
    }
}
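The test() call above is left undefined; a minimal sketch of that classification, reusing the side_of_plane() helper sketched earlier, might look like the following (poly_t exposing its vertices and bsp_node storing its splitting plane are assumptions, not part of the structs above):

int test(const poly_t& poly, const bsp_node* node)
{
    const double eps = 1e-6; //tolerance for "on the plane"
    bool front = false, back = false;
    for (const vec3& v : poly.vertices) //assumed vertex container
    {
        //plane_normal/plane_point are assumed fields; adapt to your bsp_node
        double d = side_of_plane(v, node->plane_normal, node->plane_point);
        if (d > eps) front = true;
        else if (d < -eps) back = true;
    }
    if (front && back) return SPLIT; //vertices straddle the plane
    if (front) return IN_FRONT;
    if (back) return BEHIND;
    return COINCIDENT; //all vertices lie on the plane
}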
Once you've finished creating the tree, you can do an in-order traversal and get the polygons sorted from back to front relative to the viewpoint. It doesn't matter whether the objects are transparent, since you're sorting based on the polys themselves, not their material properties.
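As a rough sketch of that traversal, assuming a hypothetical in_front_of() helper that tests which side of a node's plane the eye position is on, and a draw() call that renders a single poly:

void draw_back_to_front(const bsp_node* node, const vec3& eye)
{
    if (node == NULL) return;
    if (in_front_of(node, eye)) //assumed helper: eye vs. node's plane
    {
        //eye is on the front side: render the far (back) subtree first
        draw_back_to_front(node->lchild, eye);
        for (std::size_t i = 0; i < node->polys.size(); i++)
            draw(node->polys[i]); //assumed rendering call
        draw_back_to_front(node->rchild, eye);
    }
    else
    {
        //eye is behind the plane: render the front subtree first
        draw_back_to_front(node->rchild, eye);
        for (std::size_t i = 0; i < node->polys.size(); i++)
            draw(node->polys[i]);
        draw_back_to_front(node->lchild, eye);
    }
}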
So I have a sorting problem. What I want to do is construct an octree around the camera position, copy the leaf node data to a temporary container, then reset the octree and enter a new camera position. After this, check if any of the new node positions are in the old leaf nodes, and only update the data in the positions that have changed. The result should be that the positions closest to the camera have a higher level of detail and a larger number of nodes, as the octree is subdivided based on how close the node's center is to the camera position.
The image below illustrates this simply. If the black boxes represent the octree leaf nodes from the old camera position, none of them would be included in the new set as their centers are not the same as any of the pink boxes.
Obviously, the result I'm getting with my code isn't what it should be. There are huge areas of terrain that don't load the new positions properly/overlap with existing terrain.
I'm using an unordered set for the leaf nodes, as each one's position should be unique, and an unordered map for the chunks I want to load, so that I can load the correct chunk based on the key which is the position of the leafnode.
In the code, I update the list of chunks to create only if the number of nodes in the world changes: if (tempNodes.size() != _octree->leafNodes.size())
The rest of the logic pertains to what I've already been explaining. It might just be some simple mistake, but I also might just be misinterpreting the way certain member functions work.
void LandscapeManager::updateCenter(glm::ivec3 newCenter)
{
    std::unordered_set<Octree*> tempNodes;
    tempNodes = _octree->leafNodes;

    // reconstruct octree
    delete _octree;
    _octree = new Octree(glm::vec3(0, 0, 0), glm::vec3(pow(2, 8), pow(2, 8), pow(2, 8)));

    // update camera position in octree
    _octree->leafNodes.clear();
    _octree->insert(newCenter);
    std::cout << "Generating: " << _octree->leafNodes.size() << " chunks...\n";

    if (tempNodes.size() != _octree->leafNodes.size())
    {
        // clear chunk list and refresh with new list generated by code below
        _chunkLoader.Clear();
        for (auto i : _octree->leafNodes)
        {
            // add a chunk to the chunkloader if its position does not exist
            // in the old octree's leaf nodes
            if (!tempNodes.count(i))
            {
                _chunkLoader.Add(i->getOrigin(), i->getHalfSize());
            }
            if (tempNodes.count(i))
            {
                // remove unchanged leaf nodes from the old list so that all
                // that remains is a list of changed leaf nodes, which will be
                // used to delete old chunks
                tempNodes.erase(tempNodes.find(i));
            }
        }
        for (auto i : tempNodes)
        {
            if (_chunks.count(i->getOrigin()))
            {
                // unload and delete the meshes that have changed
                auto unload = _chunks.find(i->getOrigin());
                _chunks.at(i->getOrigin())->Unload();
                delete _chunks.at(i->getOrigin());
                _chunks.erase(unload);
            }
        }
    }
    _center = newCenter;
}
I'm trying to write a Rubik's cube solver based on the BFS algorithm. It finds the solution if one shuffle move has been made (one face turned). There's a memory problem when I do a more complicated shuffle.
I have written the cube so that moves can be performed on it, and the moves work well. I'm checking whether the cube is solved by comparing it with a new, unshuffled one. I know it's not perfect, but it should work anyway...
Here's some code:
#include <algorithm>
#include <queue>
#include <vector>

Cube search(Node &problem)
{
    int flag;
    Node *curr;
    Node *mvs[12];              //moves
    std::vector<Cube> explored; //list of positions the cube has already been in
    std::queue<Node*> Q;        //queue of possible ways
    if (problem.isgoal() == true) return problem.state;
    Q.push(&problem);
    explored.push_back(problem.state);
    while (!Q.empty())
    {
        flag = 0;
        curr = Q.front();
        if (curr->isgoal() == true)
        {
            return curr->state;
        }
        //checking if the cube has been in the curr position before
        if (std::find(explored.begin(), explored.end(), curr->state) != explored.end())
        {
            flag = 1;
            break;
        }
        if (flag == 1) break;
        explored.push_back(Q.front()->state);
        for (int i = 0; i < 12; i++) {
            Q.push(curr->expand(i)); //expand is a method that spreads the
                                     //tree of possible moves from the curr node
        }
        Q.pop();
    }
    return problem.state; //queue exhausted without finding a solution
}
TL;DR: Too broad.
As mentioned by @tarkmeper, Rubik's cubes have a huge number of combinations.
A simple shuffling algorithm will not give you an answer. I would suggest writing algorithms that solve the cube based on its initial state. As someone who solves the cube myself, I know of 2 basic methods:
1. Solve the cube layer by layer, which is the beginner's method: https://www.youtube.com/watch?v=MaltgJGz-dU
2. CFOP: Cross, F2L (First 2 Layers), OLL, PLL (OLL and PLL are sets of memorized algorithms): https://www.youtube.com/watch?v=WzE7SyDB8vA (pretty advanced)
There have been machines developed to solve the cube, but they take images of the cube as input.
I think implementing CFOP could actually solve your issue, as it does not check random shuffles of the cube but actually solves it systematically; however, it would be very difficult.
For your implementation it would be much better to store the data as a matrix.
A Rubik's cube has 3 kinds of pieces: 1. centers (1 color), 2. edges (2 colors), 3. corners (3 colors).
There are 6 centers, 12 edges, and 8 corners. You would also have to take into account which initial states are valid, as you cannot simply randomize the stickers.
What I can think up right now for a problem of this scale is to make 4 algorithms:
Cross():
.... which takes the cube and makes the cross, which is basically all white edges
aligned to the white center and their second color's center.
F2L():
.... to make the second layer of the cube along with the corner pieces of the
first layer; this could use backtracking, as a lot of invalid cases occur.
OLL():
.... based on the state of the cube after the F2L transformation, there is a
straightforward set of moves. The same goes for PLL().
Getting to the bare bones of the cube itself:
You would also need to implement the moves, which are F, F', R, R', L, L', B, B' (and likewise U, U', D, D' for the up and down faces, giving the 12 moves your code expands).
These are moves on the cube; the ones with " ' " denote turning that face in the anticlockwise direction with respect to the face of the cube you are looking at. Imagine you are holding the cube: F is the front face clockwise, R is the right face clockwise, L is the left face clockwise, and B is the back face clockwise.
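To make that concrete, here is a minimal sketch of one way to encode such moves, assuming the cube is a flat array of 54 sticker colors (9 per face) and each move is a fixed permutation of sticker indices; the actual permutation tables are assumptions and not spelled out here:

#include <array>

typedef std::array<char, 54> cube_state; //one color character per sticker

//apply one move: sticker i receives the color that was at perm[i]
cube_state apply_move(const cube_state& c, const std::array<int, 54>& perm)
{
    cube_state out;
    for (int i = 0; i < 54; i++)
        out[i] = c[perm[i]];
    return out;
}

//a counterclockwise move such as F' is the inverse permutation of F,
//so it can be derived from the same table instead of stored separately
std::array<int, 54> inverse(const std::array<int, 54>& perm)
{
    std::array<int, 54> inv;
    for (int i = 0; i < 54; i++)
        inv[perm[i]] = i;
    return inv;
}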
Rubik's cubes have a huge number of possible states (https://www.quora.com/In-how-many-ways-can-a-Rubiks-cube-be-arranged).
Conceptually, your algorithm might need to put all 43,252,003,274,489,856,000 states into the queue before it hits the correct result. You won't have that much memory for a breadth-first search.
Some states cannot lead to solutions at all, because they are not reachable by rotations: they occur only among all permutations, not among the valid rotations. Consider the example 1,2,3,4: the arrangement 3,1,2,4 is not found among the rotations of 1,2,3,4, only among its permutations.
Some positions require 18 moves to solve, and with at least 12 possible moves at every step, a breadth-first search faces up to 12^18 (roughly 2.7 × 10^19) states in the worst case. Computers are fast, but not fast enough to solve the cube using BFS. It is harder to say whether it is possible to store all solutions in a database, since only the moves that solve the cube need to be stored, but this is likely feasible (see endgame tables in chess).
Suppose you have a set s of horizontal line segments in the plane, each described by a starting point p, an end point q, and a y-value.
We can assume that all values of p and q are pairwise distinct and that no two segments overlap.
I want to compute the "lower contour" of the segments.
We can sort s by p and iterate through each segment j. If i is the "active" segment and j->y < i->y, we "switch to" j (and output the corresponding contour element).
However, what can we do when no such j exists and we instead find a j with i->q < j->p? Then we would need to switch to the "next higher" segment. But how do we know that segment? I can't find an approach that gives a running time of O(n log n). Any ideas?
A sweep line algorithm is an efficient way to solve your problem. As explained previously by Brian, we can sort all the endpoints by x-coordinate and process them in order. An important distinction to make here is that we are sorting the endpoints of the segments, not the segments themselves by increasing starting point.
If you imagine a vertical line sweeping from left to right across your segments, you will notice two things:
At any position, the vertical line either intersects a set of segments or nothing. Let's call this set the active set. The lower contour is the segment within the active set with the smallest y-coordinate.
The only x-coordinates where the lower contour can change are the segment endpoints.
This immediately yields one observation: the lower contour should be a list of segments. A list of points would not provide sufficient information to define the contour, which can be undefined at certain x-coordinates (where there are no segments).
We can model the active set with a std::set ordered by the y-position of the segment, processing the endpoints in order of increasing x-coordinate. When we encounter a left endpoint, we insert the segment; when we encounter a right endpoint, we erase it. We can find the active segment with the lowest y-coordinate with set::begin() in constant time thanks to the ordering. Since each segment is inserted once and erased once, maintaining the active set takes O(n log n) time in total.
In fact, it is possible to maintain just a std::multiset of the y-coordinates of the segments that intersect the sweep line, if that is easier.
The assumption that the segments are non-overlapping and have distinct endpoints is not strictly necessary. Overlapping segments are handled both by an ordered set of segments and by the multiset of y-coordinates. Coinciding endpoints can be handled by processing all endpoints with the same x-coordinate in one go.
Here, I assume that there are no zero-length segments (i.e. points) to simplify things, although they can also be handled with some additional logic.
#include <list>
#include <set>

// assumed segment representation: sp/ep are the x-coordinates of the left
// and right endpoints, y is the segment's height
// struct segment { int sp, ep, y; };

std::list<segment> lower_contour(std::list<segment> segments)
{
    enum event_type { OPEN, CLOSE };
    struct event {
        event_type type;
        const segment &s;
        inline int position() const {
            return type == OPEN ? s.sp : s.ep;
        }
    };
    struct order_by_position {
        bool operator()(const event& first, const event& second) const {
            return first.position() < second.position();
        }
    };

    std::list<event> events;
    for (auto s = segments.cbegin(); s != segments.cend(); ++s)
    {
        events.push_back( event { OPEN, *s } );
        events.push_back( event { CLOSE, *s } );
    }
    events.sort(order_by_position());

    // maintain a (multi)set of the y-positions for each segment that intersects the sweep line
    // the ordering allows querying for the lowest segment in O(log N) time
    // the multiset also allows overlapping segments to be handled correctly
    std::multiset<int> active_segments;

    bool contour_is_active = false;
    int contour_y;
    int contour_sp;

    // the resulting lower contour
    std::list<segment> contour;

    for (auto i = events.cbegin(); i != events.cend();)
    {
        // process all events at the same x-coordinate in one go
        auto j = i;
        int current_position = i->position();
        while (j != events.cend() && j->position() == current_position)
        {
            switch (j->type)
            {
                case OPEN:  active_segments.insert(j->s.y); break;
                // erase a single element, not every element with this y-value
                case CLOSE: active_segments.erase(active_segments.find(j->s.y)); break;
            }
            ++j;
        }
        i = j;

        if (contour_is_active)
        {
            if (active_segments.empty())
            {
                // the active contour segment ends here
                contour_is_active = false;
                contour.push_back( segment { contour_sp, current_position, contour_y } );
            }
            else
            {
                // if the current lowest position is different from the previous one,
                // the old active segment ends here and a new active segment begins
                int current_y = *active_segments.cbegin();
                if (current_y != contour_y)
                {
                    contour.push_back( segment { contour_sp, current_position, contour_y } );
                    contour_y = current_y;
                    contour_sp = current_position;
                }
            }
        }
        else
        {
            if (!active_segments.empty())
            {
                // a new contour segment begins here
                int current_y = *active_segments.cbegin();
                contour_is_active = true;
                contour_y = current_y;
                contour_sp = current_position;
            }
        }
    }

    return contour;
}
As Brian also mentioned, a binary heap like std::priority_queue can also be used to maintain the active set and tends to outperform std::set, even if it does not allow arbitrary elements to be deleted. You can work around this by flagging a segment as removed instead of erasing it. Then, repeatedly remove the top() of the priority_queue if it is a flagged segment. This might end up being faster, but it may or may not matter for your use case.
First sort all the endpoints by x-coordinate (both starting and ending points). Iterate through the endpoints and keep a std::set of all the y-coordinates of active segments. When you reach a starting point, add its y-coordinate to the set and "switch" to it if it's the lowest; when you reach an ending point, remove its y-coordinate from the set and recalculate the lowest y-coordinate using the set. This gives an O(n log n) solution overall.
A balanced binary search tree such as that used to implement std::set generally has a large constant factor. You can speed up this approach by using a binary heap (std::priority_queue) instead of a set, with the lowest y-coordinate at the root. In this case, you can't remove a non-root node, but when you reach such an ending point, just mark the segment inactive in an array. When the root node is popped, continue popping until there is a new root node that hasn't been marked inactive already. I think this will be about twice as fast as the set-based approach, but you'll have to code it yourself and see, if that's a concern.
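A minimal sketch of that lazy-deletion workaround, assuming each segment is identified by an index into an array of removed flags (all names here are illustrative):

#include <queue>
#include <vector>

struct heap_entry { int y; int id; }; //y-coordinate plus segment index

struct by_lowest_y
{
    bool operator()(const heap_entry& a, const heap_entry& b) const
    {
        return a.y > b.y; //inverted comparison makes a min-heap on y
    }
};

std::priority_queue<heap_entry, std::vector<heap_entry>, by_lowest_y> heap;
std::vector<bool> removed; //removed[id] is set once the segment's right endpoint passes

//pop flagged entries, then report the lowest live y-coordinate
//(the caller must ensure at least one live segment remains)
int lowest_active_y()
{
    while (!heap.empty() && removed[heap.top().id])
        heap.pop();
    return heap.top().y;
}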
Until now, I have used the following algorithm for finding the strongly connected components of a graph:
1. Call DFS(G) to compute the finishing time f[v] for every vertex v, and sort the vertices of G in decreasing order of finishing time.
2. Compute the transpose GT of G.
3. Perform another DFS, this time on GT; in the main for-loop, go through the vertices in decreasing order of f[v].
4. Output the vertices of each tree in the DFS forest (formed by the second DFS) as a separate strongly connected component.
But I was wondering if it is possible to find all the strongly connected components in only one DFS.
Any help in this regard would be highly appreciated.
Check out The Algorithm Design Manual by Steven Skiena. It calculates SCCs in one DFS, based on the concept of the oldest reachable vertex.
In the beginning, initialize each vertex's oldest reachable vertex to itself and its SCC number to unassigned:
low[i] = i;
scc[i] = -1;
Do a DFS on the digraph; you are only interested in back edges and cross edges, because these are the two kinds of edges that can close a cycle or take you from one component into another.
int edge_classification(int x, int y)
{
    if (parent[y] == x) return(TREE);
    if (discovered[y] && !processed[y]) return(BACK);
    if (processed[y] && (entry_time[y] > entry_time[x])) return(FORWARD);
    if (processed[y] && (entry_time[y] < entry_time[x])) return(CROSS);
    printf("Warning: unclassified edge (%d,%d)\n", x, y);
    return(-1); /* should not be reached for a well-formed DFS */
}
So when you encounter these edges, you update the oldest reachable vertex low[] based on the entry times:
if (class == BACK) {
    if (entry_time[y] < entry_time[ low[x] ])
        low[x] = y;
}
if (class == CROSS)
{
    if (scc[y] == -1) /* component not yet assigned */
        if (entry_time[y] < entry_time[ low[x] ])
            low[x] = y;
}
A new strongly connected component is found whenever the oldest reachable vertex from a vertex v is v itself (for example, in the cycle a->b->c->a, the oldest reachable vertex of a is a). Each vertex is pushed onto a stack when the DFS first enters it:
void process_vertex_early(int v)
{
    push(&active, v);
}
After the DFS for a vertex is complete (the DFS for its neighbors will have completed too), check the oldest reachable vertex for it:
if (low[v] == v)
{   /* edge (parent[v],v) cuts off scc */
    pop_component(v);
}
if (entry_time[low[v]] < entry_time[low[parent[v]]])
    low[parent[v]] = low[v];
pop_component(...) simply pops from the stack until the root of the component is found. If a->b->c->a has been scanned, the stack will contain a (bottom), b, c (top); pop until vertex a is seen, and you get the SCC for a. Similarly, you get all the strongly connected components in one DFS.
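For reference, a minimal sketch of pop_component() matching that description; the active stack, the scc[] array, and a components_found counter are assumed to exist as in the snippets above, so treat this as an approximation rather than Skiena's exact code:

void pop_component(int v)
{
    int t;
    components_found = components_found + 1; /* number the new component */
    scc[v] = components_found;
    while ((t = pop(&active)) != v)          /* pop until v itself comes off */
    {
        scc[t] = components_found;
    }
}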
I found this on the Wikipedia page for Strongly connected component:
Kosaraju's algorithm, Tarjan's algorithm and the path-based strong
component algorithm all efficiently compute the strongly connected
components of a directed graph, but Tarjan's and the path-based
algorithm are favoured in practice since they require only one
depth-first search rather than two.
I think this quite answers your question :)
I am trying to build a KD tree (static case). We assume the points are sorted on both x and y coordinates.
For even depth of recursion the set is split into two subsets with a vertical line going through median x coordinate.
For odd depth of recursion the set is split into two subsets with a horizontal line going through median y coordinate.
The median can be determined from the set sorted by the x / y coordinate. I perform this step before each split of the set, and I think it is what causes the slow construction of the tree.
Could you please help me check and optimize the code?
I also cannot find the k-th nearest neighbor; could somebody help me with that code?
Thank you very much for your help and patience...
Please see the sample code:
class KDNode
{
    private:
        Point2D *data;
        KDNode *left;
        KDNode *right;
        ....
};
void KDTree::createKDTree(Points2DList *pl)
{
    //Create list
    KDList kd_list;

    //Create KD list (all input points)
    for (unsigned int i = 0; i < pl->size(); i++)
    {
        kd_list.push_back((*pl)[i]);
    }

    //Sort points by y (the first split, at depth 1, is by y)
    std::sort(kd_list.begin(), kd_list.end(), sortPoints2DByY());

    //Build KD Tree
    root = buildKDTree(&kd_list, 1);
}
KDNode * KDTree::buildKDTree(KDList *kd_list, const unsigned int depth)
{
    //Build KD tree
    const unsigned int n = kd_list->size();

    //No leaf will be built
    if (n == 0)
    {
        return NULL;
    }
    //Only one point: create leaf of KD Tree
    else if (n == 1)
    {
        //Create one leaf
        return new KDNode(new Point2D((*kd_list)[0]));
    }
    //At least 2 points: create one node, split tree into left and right subtrees
    else
    {
        //New KD node
        KDNode *node = NULL;

        //Get median index
        const unsigned int median_index = n/2;

        //Create new KD Lists
        KDList kd_list1, kd_list2;

        //The depth is even, process by x coordinate
        if (depth%2 == 0)
        {
            //Create new median node
            node = new KDNode(new Point2D((*kd_list)[median_index]));

            //Split list
            for (unsigned int i = 0; i < n; i++)
            {
                //Get actual point
                Point2D *p = &(*kd_list)[i];

                //Add point to the first list: x < median.x
                if (p->getX() < (*kd_list)[median_index].getX())
                {
                    kd_list1.push_back(*p);
                }
                //Add point to the second list: x > median.x
                else if (p->getX() > (*kd_list)[median_index].getX())
                {
                    kd_list2.push_back(*p);
                }
            }

            //Sort points by y for the next recursion step: slow construction of the tree???
            std::sort(kd_list1.begin(), kd_list1.end(), sortPoints2DByY());
            std::sort(kd_list2.begin(), kd_list2.end(), sortPoints2DByY());
        }
        //The depth is odd, process by y coordinates
        else
        {
            //Create new median node
            node = new KDNode(new Point2D((*kd_list)[median_index]));

            //Split list
            for (unsigned int i = 0; i < n; i++)
            {
                //Get actual point
                Point2D *p = &(*kd_list)[i];

                //Add point to the first list: y < median.y
                if (p->getY() < (*kd_list)[median_index].getY())
                {
                    kd_list1.push_back(*p);
                }
                //Add point to the second list: y > median.y
                else if (p->getY() > (*kd_list)[median_index].getY())
                {
                    kd_list2.push_back(*p);
                }
            }

            //Sort points by x for the next recursion step: slow construction of the tree???
            std::sort(kd_list1.begin(), kd_list1.end(), sortPoints2DByX());
            std::sort(kd_list2.begin(), kd_list2.end(), sortPoints2DByX());
        }

        //Build left subtree
        node->setLeft( buildKDTree(&kd_list1, depth + 1) );

        //Build right subtree
        node->setRight( buildKDTree(&kd_list2, depth + 1) );

        //Return new node
        return node;
    }
}
The sorting to find the median is probably the worst culprit here: it is O(n log n), while the problem is solvable in O(n) time. You should use nth_element instead: http://www.cplusplus.com/reference/algorithm/nth_element/. It finds the median in linear time on average, after which you can split the vector in linear time as well.
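A minimal sketch of such a median split, reusing the question's Point2D (and assuming const getX()/getY() accessors; the comparator functions are illustrative):

#include <algorithm>
#include <vector>

bool less_by_x(const Point2D& a, const Point2D& b) { return a.getX() < b.getX(); }
bool less_by_y(const Point2D& a, const Point2D& b) { return a.getY() < b.getY(); }

//partially order pts so that pts[mid] is the median on the current axis,
//with smaller elements before it and larger ones after it, in O(n) average time
void median_split(std::vector<Point2D>& pts, unsigned int depth)
{
    const std::size_t mid = pts.size() / 2;
    std::nth_element(pts.begin(), pts.begin() + mid, pts.end(),
                     depth % 2 == 0 ? less_by_x : less_by_y);
    //recurse on [begin, mid) and [mid + 1, end) without any full sort
}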
Memory management in vector is also something that can take a lot of time, especially with large vectors, since every time the vector's size is doubled all the elements have to be moved. You can use the reserve method of vector to reserve exactly enough space for the vectors in the newly created nodes, so they need not grow dynamically as new elements are added with push_back.
And if you absolutely need the best performance, you could use lower-level code, doing away with vector and using plain preallocated arrays instead. Nth-element or "selection" algorithms are readily available and not too hard to write yourself: http://en.wikipedia.org/wiki/Selection_algorithm
Some hints on optimizing the kd-tree:
Use a linear time median finding algorithm, such as QuickSelect.
Avoid actually using "node" objects. You can store the whole tree using the points only, with ZERO additional information, essentially by just sorting an array of objects; the root node will then be in the middle. A rearrangement that puts the root first and then uses a heap layout will likely be nicer to the CPU memory cache at query time, but is more tricky to build. A sketch of the sorted-array variant follows below.
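A minimal sketch of that pointer-free layout, again assuming the question's Point2D with const getX()/getY() accessors; the array itself becomes the tree, with the middle of each subarray acting as that subtree's node:

#include <algorithm>
#include <vector>

bool less_by_x(const Point2D& a, const Point2D& b) { return a.getX() < b.getX(); }
bool less_by_y(const Point2D& a, const Point2D& b) { return a.getY() < b.getY(); }

//after this call, the middle element of each subarray is that subtree's node
//and the two halves are its subtrees: no KDNode objects at all
void build_implicit(std::vector<Point2D>& pts, std::size_t lo, std::size_t hi,
                    unsigned int depth)
{
    if (hi - lo <= 1) return; //zero or one point: nothing left to split
    const std::size_t mid = lo + (hi - lo) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                     depth % 2 == 0 ? less_by_x : less_by_y);
    build_implicit(pts, lo, mid, depth + 1);
    build_implicit(pts, mid + 1, hi, depth + 1);
}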
Not really an answer to your questions, but I would highly recommend the forum at http://ompf.org/forum/
They have some great discussions over there for fast kd-tree constructions in various contexts. Perhaps you'll find some inspiration over there.
Edit:
The OMPF forums have since gone down, although a direct replacement is currently available at http://ompf2.com/
Your first culprit is sorting to find the median. This is almost always the bottleneck for K-d tree construction, and using more efficient algorithms here will really pay off.
However, you're also constructing a pair of variable-sized vectors each time you split and transferring elements to them.
Here I recommend the good ol' singly-linked list. The beauty of the linked list is that you can transfer elements from parent to child by simply changing next pointers to point at the child's root pointer instead of the parent's.
That means no heap overhead whatsoever during construction to transfer elements from parent nodes to child nodes, only to aggregate the initial list of elements to insert to the root. That should do wonders as well, but if you want even faster, you can use a fixed allocator to efficiently allocate nodes for the linked list (as well as for the tree) and with better contiguity/cache hits.
Last but not least, if you're involved in intensive computing tasks that call for K-d trees, you need a profiler. Measure your code and you'll see exactly where the culprit lies, with exact time distributions.