For this program, I am given a set of inputs that I need to store in an adjacency matrix. I've done this, so I have an adjacency matrix Matrix[11][11]. Now, using this matrix, I need to perform a depth-first search and return the π (predecessor) values.
I have the pseudocode for this, so I believe that I need two methods: DFS(graph) and DFS-VISIT(node). However, I'm having trouble actually implementing it. Can I work with the adjacency matrix directly, or do I somehow need to build a graph structure from it first? Any help with the actual coding would be appreciated.
DFS(G)
    for each u ∈ V[G] do
        color[u] = WHITE
        ∏[u] = NIL
    time = 0
    for each u ∈ V[G] do
        if color[u] = WHITE then
            DFS-VISIT(u)

DFS-VISIT(u)
    color[u] = GRAY
    time++
    d[u] = time
    for each v ∈ Adj[u] do
        if color[v] = WHITE then
            ∏[v] = u
            DFS-VISIT(v)
    color[u] = BLACK
    time++
    f[u] = time
The pseudo-code you have there seems to assume an adjacency list.
Specifically this code: (indentation corresponding to code blocks assumed)
for each v ∈ Adj[u] do
    if color[v] = WHITE then
        ∏[v] = u
        DFS-VISIT(v)
The difference is: with an adjacency matrix, every vertex has a full row, and one typically uses 0/1 flags to indicate whether there's an edge between the current and target vertices.
So you should loop through all vertices in that row of the adjacency matrix, and only recurse when the flag is set to 1.
That part of the pseudo-code will look something like:
for v = 1 to n do // assuming 1-indexed
    if color[v] = WHITE && AdjMatrix[u][v] == 1 then
        ∏[v] = u
        DFS-VISIT(v)
As far as I can tell, the rest of the pseudo-code should look identical.
Generally it is preferred to code DFS with the graph represented as an adjacency list, because then the time complexity is O(|V| + |E|); with an adjacency matrix it becomes O(|V|²). Below is an implementation of DFS assuming an adjacency matrix representation:
#include <vector>
using namespace std;

#define WHITE 0
#define GRAY  1
#define BLACK 2

const int n = 11; // number of vertices (Matrix[11][11] in the question)
int time_;
vector<int> color(n, WHITE), par(n, 0), strt(n, 0), fin(n, 0);
vector<vector<int> > G(n, vector<int>(n, 0)); // adjacency matrix

void dfs_visit(int);

void DFS(){
    for(int i = 0; i < n; i++)
        color[i] = WHITE, par[i] = -1;
    time_ = 0;
    for(int j = 0; j < n; j++)
        if(color[j] == WHITE)
            dfs_visit(j);
}

void dfs_visit(int u){
    time_++;
    strt[u] = time_;    // discovery time
    color[u] = GRAY;
    for(int v = 0; v < n; v++)
        if(G[u][v] && color[v] == WHITE){
            par[v] = u; // u is the predecessor (π) of v
            dfs_visit(v);
        }
    color[u] = BLACK;
    time_++;
    fin[u] = time_;     // finishing time
}
The par[] array stores the parent (π) of each vertex, and the strt[] and fin[] arrays hold the discovery and finishing time stamps. Vertices are 0-based numbered.
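For completeness, a minimal sketch of how this could be driven from the question's Matrix[11][11] (the main below is my addition, not part of the answer above):

#include <cstdio>

int main(){
    int Matrix[11][11] = {0}; // fill this from your input
    for(int i = 0; i < n; i++)
        for(int j = 0; j < n; j++)
            G[i][j] = Matrix[i][j]; // copy the 0/1 flags into the DFS's matrix
    DFS();
    for(int u = 0; u < n; u++)
        printf("pi[%d] = %d\n", u, par[u]); // -1 plays the role of NIL
    return 0;
}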
I am applying a 3D Voronoi pattern to a mesh. Using the loops below, I am able to compute the cell position, an id, and the distance.
But I would like to compute a normal based on the generated pattern.
How can I generate a normal, or reorient the current normal, based on this pattern and its associated cells?
The aim is to give the mesh a faceted look: every fragment within a cell should share the same normal, while adjacent cells point in different directions. Those directions should be based on the original mesh normals; I don't want to totally break the mesh normals and have them point in random directions.
Here's how I generate the Voronoi pattern.
float3 p = floor(position);
float3 f = frac(position);
float id = 0.0;
float distance = 10.0;
for (int k = -1; k <= 1; k++)
{
for (int j = -1; j <= 1; j++)
{
for (int i = -1; i <= 1; i++)
{
float3 cell = float3(float(i), float(j), float(k));
float3 random = hash3(p + cell);
float3 r = cell - f + random * angleOffset;
float d = dot(r, r);
if (d < distance)
{
id = random;
distance = d;
cellPosition = cell + p;
normal = ?
}
}
}
}
And here's the hash function:
float3 hash3(float3 x)
{
x = float3(dot(x, float3(127.1, 311.7, 74.7)),
dot(x, float3(269.5, 183.3, 246.1)),
dot(x, float3(113.5, 271.9, 124.6)));
return frac(sin(x)*43758.5453123);
}
This looks like a fairly expensive fragment shader; it might make more sense to bake out a normal map than to try to do this in real time.
It's hard to tell exactly what your shader is doing, but I think it's checking every pixel against a 3x3x3 grid of Voronoi cells. One weird thing is that random is a float3 that somehow gets assigned to id, which is just a scalar.
Anyway, it sounds like you would like to perturb the mesh-supplied normal by a random vector, but you'd like all pixels corresponding to a particular Voronoi cell to be perturbed in the same way.
Since you already have a variable called random which presumably represents a random value generated deterministically as a function of the voronoi cell, you could just use that. For example, the following would perturb the normal by a small amount:
normal = normalize(meshNormal + 0.2 * normalize(random));
If you want to give more weight to the random component, just increase the 0.2 constant.
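Note that since random is just the hash of the winning cell (hash3(p + cell)), it is constant across that cell: every fragment landing in the same cell gets the same perturbation, which is exactly what produces the faceted look you describe.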
I'm writing a brute-force algorithm to solve vertex cover like this:
BruteForceVertexCover( Graph G = (V,E) ){
    for size = 1 ... |V|
        vector<int> v = {0...size-1}
        do{
            if(test(G, v)) return v; //test if v covers G
        } while(v has next combination of length size);
    return empty vector<int>;
}
//this stops when it finds an answer, and it will find one,
//since it tries all combinations of all sizes
where
bool test( Graph G = (V,E), vector<int> v ){
    for each u in v:
        for each w in G[u]:
            remove u from G[w] //this is linear in the size of vector G[w]
        clear G[u] //now all (bidirectional) edges incident to u are removed
    for i = 0 ... |V|:
        if(G[i] is not empty) return false;
    return true;
}
I'm running this on lots of graphs (their maximum size is 20 vertices) and it is taking forever... is there any optimization I can apply to this brute force so it runs faster?
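One sketch of a speed-up (my own illustration with assumed names, not tested against your code): with at most 20 vertices the whole graph fits in per-vertex bitmasks, so you can enumerate all 2^n subsets and test a candidate cover with a few bit operations instead of repeatedly mutating the graph:

#include <cstdint>
#include <vector>

// adj[u] is a bitmask of u's neighbours; cover is a bitmask of chosen vertices.
bool covers(const std::vector<uint32_t>& adj, uint32_t cover) {
    for (std::size_t u = 0; u < adj.size(); ++u)
        // an edge (u,v) is uncovered iff both endpoints lie outside the cover
        if (!((cover >> u) & 1) && (adj[u] & ~cover))
            return false;
    return true;
}

int minVertexCoverSize(const std::vector<uint32_t>& adj) {
    int n = (int)adj.size(), best = n;
    for (uint32_t s = 0; s < (1u << n); ++s)                // all subsets; fine for n <= 20
        if (__builtin_popcount(s) < best && covers(adj, s)) // popcount is a GCC/Clang builtin
            best = __builtin_popcount(s);                   // track s too if you need the cover itself
    return best;
}

That is on the order of 2^20 * 20 ≈ 2*10^7 bit operations per graph, which should finish in milliseconds rather than taking forever.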
I want to compute K*es, where K is an Eigen matrix (dimension p×p) and es is a p×1 random binary vector with t ones.
For example if p=5 and t=2 a possible es is [1,0,1,0,0]' or [0,0,1,1,0]' and so on...
How do I easily generate es with Eigen?
I came up with an even better solution, which is a combination of std::vector, Eigen::Map and std::shuffle.
std::vector<int> esv(p,0);
std::fill_n(esv.begin(),t,1);
Eigen::Map<Eigen::VectorXi> es (esv.data(), esv.size());
std::random_device rd;
std::mt19937 g(rd());
std::shuffle(std::begin(esv), std::end(esv), g);
This solution is memory efficient (since Eigen::Map doesn't copy esv) and has the big advantage that if we want to permute es several times (like in this case), then we just need to repeat std::shuffle(std::begin(esv), std::end(esv), g);
Maybe I'm wrong, but this solution seems more elegant and efficient than the previous ones.
So you're using Eigen. I'm not sure what matrix type you're using, but I'll go off the class Eigen::MatrixXd.
What you need to do is:
Create a 1xp matrix that's all 0
Choose t random positions between 0 and p-1 to flip from 0 to 1, and make sure each position is unique.
The following code should do the trick, although you could implement it other ways.
//Your p and t
int p = 5;
int t = 2;
//1xp matrix (Eigen indices are 0-based)
MatrixXd es(1, p);
//Initialize the whole 1xp matrix to 0
for (int i = 0; i < p; ++i)
    es(0, i) = 0;
//Flip t unique random positions from 0 to 1
for (int i = 0; i < t; ++i)
{
    int randPos = rand() % p;
    //If the position was already a 1 and not a 0, get a different random position
    while (es(0, randPos) == 1)
        randPos = rand() % p;
    //Change the random position from a 0 to a 1
    es(0, randPos) = 1;
}
When t is close to p, Ryan's method needs to generate far more than t random numbers. To avoid this performance degradation, you could solve your original problem
find t distinct numbers from [0, p) that are uniformly distributed
by the following steps
generate t uniformly distributed random numbers idx[t] from [0, p-t+1)
sort these numbers idx[t]
idx[i]+i, i=0,...,t-1 are the result
For example, with p=5 and t=2 you draw twice from [0, 4); if the sorted draws are idx = [2, 2], the result is positions 2 and 3, which are distinct by construction.
The code:
VectorXi idx(t);
VectorXd es(p);
es.setConstant(0);
for(int i = 0; i < t; ++i) {
idx(i) = rand() % (p - t + 1); // uniform in [0, p-t]; the original scaling could hit p-t+1 when rand() == RAND_MAX
}
std::sort(idx.data(), idx.data() + idx.size());
for(int i = 0; i < t; ++i) {
es(idx(i)+i) = 1.0;
}
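A side note, not from the original answer: rand() % (p-t+1) carries a slight modulo bias; with C++11 you could draw from std::uniform_int_distribution<int>(0, p-t) using an std::mt19937, as in the shuffle-based answer above.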
I am writing this question fishing for any state-of-the-art software or methods that can quickly compute the intersection of N 2D polygons (the convex hulls of projected convex polyhedra) with M 2D polygons, where typically N >> M. N may be on the order of at least 1M polygons and M on the order of 50k. I've searched for some time now, but I keep coming up with the same answer, shown below.
Use boost and a loop to
compute the projection of the polyhedron (not the bottleneck)
compute the convex hull of said projection (bottleneck)
compute the intersection of the projected polyhedron and each existing 2D polygon it hits (major bottleneck).
This loop is repeated NK times where typically K << M, and K is the average number of 2D polygons intersecting a single projected polyhedron. This is done to reduce the number of computations.
The problem with this is that if I have N=262144 and M=19456 it takes about 129 seconds (when multithreaded by polyhedron), and this must be done about 300 times. Ideally, I would like to reduce the computation time to about 1 second for the above sizes, so I was wondering if someone could help point to some software or literature that could improve efficiency.
[EDIT]
Per @sehe's request, I'm posting the most relevant parts of the code. I haven't compiled it, so this is just to get the gist... This code assumes there are voxels and pixels, but the shapes can be anything. The order of the points in the grid can be arbitrary, but the indices of where the points reside in the grid are the same.
#include <boost/geometry/geometry.hpp>
#include <boost/geometry/geometries/point.hpp>
#include <boost/geometry/geometries/ring.hpp>
const std::size_t Dimension = 2;
typedef boost::geometry::model::point<float, Dimension, boost::geometry::cs::cartesian> point_2d;
typedef boost::geometry::model::polygon<point_2d, false /* is cw */, true /* closed */> polygon_2d;
typedef boost::geometry::model::box<point_2d> box_2d;
std::vector<float> getOverlaps(std::vector<float> & projected_grid_vx, // projected voxels
std::vector<float> & pixel_grid_vx, // pixels
std::vector<int> & projected_grid_m, // number of voxels in each dimension
std::vector<int> & pixel_grid_m, // number of pixels in each dimension
std::vector<float> & pixel_grid_omega, // size of the pixel grid in cm
int projected_grid_size, // total number of voxels
int pixel_grid_size) { // total number of pixels
std::vector<float> overlaps(projected_grid_size * pixel_grid_size);
std::vector<float> h(pixel_grid_m.size());
for(int d=0; d < pixel_grid_m.size(); d++) {
h[d] = (pixel_grid_omega[2*d+1] - pixel_grid_omega[2*d]) / pixel_grid_m[d];
}
for(int i=0; i < projected_grid_size; i++){
std::vector<int> point_indices(8); // integer indices of the 8 voxel corner points
point_indices[0] = i;
point_indices[1] = i + 1;
point_indices[2] = i + projected_grid_m[0];
point_indices[3] = i + projected_grid_m[0] + 1;
point_indices[4] = i + projected_grid_m[0] * projected_grid_m[1];
point_indices[5] = i + projected_grid_m[0] * projected_grid_m[1] + 1;
point_indices[6] = i + (projected_grid_m[1] + 1) * projected_grid_m[0];
point_indices[7] = i + (projected_grid_m[1] + 1) * projected_grid_m[0] + 1;
std::vector<float> vx_corners(8 * projected_grid_m.size());
for(int vn = 0; vn < 8; vn++) {
for(int d = 0; d < projected_grid_m.size(); d++) {
vx_corners[vn + d * 8] = projected_grid_vx[point_indices[vn] + d * projected_grid_size];
}
}
polygon_2d proj_voxel;
for(int vn = 0; vn < 8; vn++) {
point_2d poly_pt(vx_corners[vn], vx_corners[vn + 8]); // x at [vn], y at [vn + 8], matching the fill loop above
boost::geometry::append(proj_voxel, poly_pt);
}
boost::geometry::correct(proj_voxel);
polygon_2d proj_voxel_hull;
boost::geometry::convex_hull(proj_voxel, proj_voxel_hull);
box_2d bb_proj_vox;
boost::geometry::envelope(proj_voxel_hull, bb_proj_vox);
point_2d min_pt = bb_proj_vox.min_corner();
point_2d max_pt = bb_proj_vox.max_corner();
// then get min and max indices of intersecting bins
std::vector<float> min_idx(projected_grid_m.size() - 1),
max_idx(projected_grid_m.size() - 1);
// compute min and max indices of incidence on the pixel grid
// this is easy assuming you have a regular grid of pixels
min_idx[0] = std::min( (float) std::max( std::floor((min_pt.get<0>() - pixel_grid_omega[0]) / h[0] - 0.5 ), 0.), (float) (pixel_grid_m[0]-1));
min_idx[1] = std::min( (float) std::max( std::floor((min_pt.get<1>() - pixel_grid_omega[2]) / h[1] - 0.5 ), 0.), (float) (pixel_grid_m[1]-1));
max_idx[0] = std::min( (float) std::max( std::floor((max_pt.get<0>() - pixel_grid_omega[0]) / h[0] + 0.5 ), 0.), (float) (pixel_grid_m[0]-1));
max_idx[1] = std::min( (float) std::max( std::floor((max_pt.get<1>() - pixel_grid_omega[2]) / h[1] + 0.5 ), 0.), (float) (pixel_grid_m[1]-1));
// iterate only over pixels which intersect the projected voxel
for(int iy = min_idx[1]; iy <= max_idx[1]; iy++) {
for(int ix = min_idx[0]; ix <= max_idx[0]; ix++) {
int idx = ix + iy * pixel_grid_m[0]; // `first' index of pixel corner point
polygon_2d pix_poly;
for(int pn = 0; pn < 4; pn++) {
point_2d pix_corner_pt(
pixel_grid_vx[idx + pn % 2 + (pn / 2) * pixel_grid_m[0]],
pixel_grid_vx[idx + pn % 2 + (pn / 2) * pixel_grid_m[0] + pixel_grid_size]
);
boost::geometry::append(pix_poly, pix_corner_pt);
}
boost::geometry::correct( pix_poly );
//make this into a convex hull since the order of the point may be any
polygon_2d pix_hull;
boost::geometry::convex_hull(pix_poly, pix_hull);
// on to perform intersection
std::vector<polygon_2d> vox_pix_ints;
polygon_2d vox_pix_int;
try {
boost::geometry::intersection(proj_voxel_hull, pix_hull, vox_pix_ints);
} catch ( const std::exception & e ) {
// skip since these may coincide at a point or line
continue;
}
// both polygons are convex, so at most one intersection region is expected
if (vox_pix_ints.empty()) continue; // shapes touching at a point or line yield no region
vox_pix_int = vox_pix_ints[0];
overlaps[i + idx * projected_grid_size] = boost::geometry::area(vox_pix_int);
}
} // end intersection for
} //end projected_voxel for
return overlaps;
}
You could use the ratio of polygon area to bounding-box area:
This could be computed once to arrive at an average poly-area-to-BB-area ratio, a constant R.
Or you could derive it geometrically, using a circle bounded by its BB, since you're dealing only with projected (convex) polyhedra:
R = 0.0;
count = 0;
for (each poly) {
count++;
R += polyArea / itsBoundingBoxArea;
}
R = R/count;
Then calculate the summation of the intersection areas of the bounding boxes.
Sbb = 0.0;
for (box1, box2 where box1.isIntersecting(box2)) {
Sbb += box1.intersect(box2);
}
Then:
Approximation = R * Sbb
All of this would not work if concave polys were allowed, because a concave poly can occupy less than 1% of its bounding box. You would still have to find the convex hull.
Alternatively, if you can find a polygon's area quicker than its hull, you could use the actual computed average poly area. This would give you a decent approximation as well, while avoiding both the polygon intersection and the hull wrapping.
Hm, the problem seems similar to collision detection in game engines, or to "potentially visible sets".
While I don't know much about the current state of the art, I remember one optimization was to enclose objects in spheres, since checking overlap between spheres (or circles in 2D) is really cheap.
To speed up collision checks, objects are often put into search structures, e.g. a sphere tree (circle tree in the 2D case), basically organizing the space into a hierarchical structure to make overlap queries fast.
So basically my suggestion boils down to: look at collision-detection algorithms in game engines. The broad phase can be as cheap as the circle test sketched below.
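As an illustration only (the struct and function names here are made up, not from any particular engine):

// Broad phase: enclose each polygon in a circle once, and run the expensive
// exact polygon intersection only on pairs whose circles overlap.
struct Circle { float x, y, r; };

bool mayIntersect(const Circle& a, const Circle& b) {
    float dx = a.x - b.x, dy = a.y - b.y, rs = a.r + b.r;
    return dx * dx + dy * dy <= rs * rs; // compare squared distances: no sqrt needed
}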
Assumption
I'm assuming that you mean "intersections" and not a single intersection, and that in the expected use case most of the individual polys from M and N will not overlap at the same time. If this assumption is true, then:
Answer
The way this is done in 2D game engines is with a scene graph where every object has a bounding box. Then place all the polygons into the nodes of a quadtree according to the location given by their bounding box. The task then becomes parallel, because each node can be processed separately for intersection.
Here is the wiki for quadtree:
Quadtree Wiki
An octree could be used in 3D.
It actually doesn't even have to be an octree. You could get the same results with any space partition. You could find the maximum separation of polys (let's call it S) and create, say, S/10 space partitions. Then you would have 10 separate spaces to process in parallel. Not only would it be concurrent, it would no longer be M * N time, since not every poly must be compared against every other poly. The R-tree sketch below achieves the same effect.
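Since the question already uses Boost.Geometry, one concrete option (a sketch only; the helper names and the pair value layout are my assumptions) is its built-in R-tree, which replaces the per-voxel scan over all M pixel polygons with a logarithmic box query:

#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iterator>
#include <utility>
#include <vector>

namespace bg  = boost::geometry;
namespace bgi = boost::geometry::index;
typedef bg::model::point<float, 2, bg::cs::cartesian> point_2d;
typedef bg::model::box<point_2d> box_2d;
typedef std::pair<box_2d, int> rtree_value; // bounding box + pixel-polygon index
typedef bgi::rtree<rtree_value, bgi::quadratic<16> > rtree_t;

// Build once over the M pixel polygons' bounding boxes.
rtree_t buildIndex(const std::vector<box_2d>& pixel_boxes) {
    std::vector<rtree_value> values;
    for (int m = 0; m < (int)pixel_boxes.size(); ++m)
        values.push_back(std::make_pair(pixel_boxes[m], m));
    return rtree_t(values.begin(), values.end()); // bulk construction
}

// Per projected voxel: fetch only the candidates whose boxes overlap its
// envelope, then run the exact boost::geometry::intersection on those few.
std::vector<rtree_value> candidatesFor(const rtree_t& index, const box_2d& bb_proj_vox) {
    std::vector<rtree_value> hits;
    index.query(bgi::intersects(bb_proj_vox), std::back_inserter(hits));
    return hits;
}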
I read the approach given by Wikipedia for printing the shortest path between two given points in a graph by modifying the Floyd-Warshall algorithm. I coded this, but it's not really giving the expected output:
Initialize all the elements in minimumDistanceMatrix[i][j] to the respective weights in the graph, and all the elements in the matrix shortestPathCalculatorMatrix[i][j] to -1.
Then:
// Find shortest path using Floyd–Warshall algorithm
for ( unsigned int k = 0 ; k < getTotalNumberOfCities() ; ++ k)
for ( unsigned int i = 0 ; i < getTotalNumberOfCities() ; ++ i)
for ( unsigned int j = 0 ; j < getTotalNumberOfCities() ; ++ j)
if ( minimumDistanceMatrix[i][k] + minimumDistanceMatrix[k][j] < minimumDistanceMatrix[i][j] )
{
minimumDistanceMatrix[i][j] = minimumDistanceMatrix[i][k] + minimumDistanceMatrix[k][j];
shortestPathCalculatorMatrix [i][j] = k;
}
Then:
void CitiesMap::findShortestPathListBetween(int source , int destination)
{
if( source == destination || source < 0 || destination < 0)
return;
if( INFINITY == getShortestPathBetween(source,destination) )
return ;
int intermediate = shortestPathCalculatorMatrix[source][destination];
if( -1 == intermediate )
{
pathCityList.push_back( destination );
return ;
}
else
{
findShortestPathListBetween( source, intermediate ) ;
pathCityList.push_back(intermediate);
findShortestPathListBetween( intermediate, destination ) ;
return ;
}
}
P.S: pathCityList is a vector which is assumed to be empty before a call to findShortestPathListBetween is made. At the end of this call, this vector has all the intermediate nodes in it.
Can someone point out where I might be going wrong ?
It’s much easier (and more direct) not to iterate over indices but over vertices. Furthermore, each predecessor (usually denoted π, not next) needs to point to its, well, predecessor, not the current temporary vertex.
Given a |V|×|V| adjacency matrix dist for the distances, initialised to infinity, and a |V|×|V| adjacency matrix next with pointers to vertices:
for each vertex v
    dist[v, v] ← 0
for each edge (u,v)
    dist[u, v] ← w(u,v) // the weight of the edge (u,v)
    next[u, v] ← u
for each vertex k
    for each vertex i
        for each vertex j
            if dist[i, k] + dist[k, j] < dist[i, j] then
                dist[i, j] ← dist[i, k] + dist[k, j]
                next[i, j] ← next[k, j]
Note that I’ve changed the three nested loops to iterate over vertices not indices, and I’ve fixed the last line to refer to the previous node rather than any intermediate node.
An implementation of the above which looks almost like the pseudocode can be found, for instance, in scipy.sparse.csgraph.
Path reconstruction starts at the end (j in the code below) and jumps to the predecessor of j (at next[i, j]) until it reaches i.
function path(i, j)
    if i = j then
        write(i)
    else if next[i, j] = NIL then
        write("no path exists")
    else
        path(i, next[i, j])
        write(j)
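For concreteness, here is a minimal C++ sketch of the predecessor convention above (my translation, 0-indexed, with -1 standing in for NIL; dist is assumed pre-filled with 0 on the diagonal, edge weights, and infinity elsewhere, and next with u for each edge (u,v)):

#include <iostream>
#include <limits>
#include <vector>

const double INF = std::numeric_limits<double>::infinity();

void floydWarshall(std::vector<std::vector<double> >& dist,
                   std::vector<std::vector<int> >& next) {
    int n = dist.size();
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    next[i][j] = next[k][j]; // predecessor of j on the new i->j path
                }
}

// Prints the path from i to j by walking back through the predecessors.
void path(int i, int j, const std::vector<std::vector<int> >& next) {
    if (i == j)           { std::cout << i << ' '; return; }
    if (next[i][j] == -1) { std::cout << "no path exists"; return; }
    path(i, next[i][j], next);
    std::cout << j << ' ';
}

Using doubles and a real infinity also sidesteps the integer-overflow trap of inf + inf in the relaxation step.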
A bit late, but note that whether next[i][j] = next[k][j] or next[i][j] = next[i][k] is correct depends on how next is initialised. The pseudocode above sets next[u][v] = u (a predecessor matrix), so next[i][j] = next[k][j] is right there; if you instead initialise next[i][j] = j (a successor matrix, as in the code below), the update must be next[i][j] = next[i][k].
Try each combination yourself on a sample input and you will see why it works, and why mixing the two conventions does not.
Here is the implementation below. Why not try a problem on it! Enjoy CP!!
// g[][] is the graph (successor convention: next[i][j] stores the vertex
// after i on the shortest path between (i,j))
for(int i=1;i<=n;i++){
    for(int j=1;j<=n;j++){
        if(i==j) g[i][j]=0;              // a vertex is at distance 0 from itself
        else if(g[i][j]==0) g[i][j]=inf; // there is no edge b/w (i,j)
        else next[i][j]=j;               // if there is an edge b/w i and j then j is next to i
    }
}
for(int k=1;k<=n;k++){
    for(int i=1;i<=n;i++){
        for(int j=1;j<=n;j++){
            if(g[i][k]<inf && g[k][j]<inf && g[i][j]>g[i][k]+g[k][j]){ // the <inf guards avoid inf+inf overflow
                g[i][j]=g[i][k]+g[k][j];
                next[i][j]=next[i][k]; // going through k: the first step of i->j is the first step of i->k
            }
        }
    }
}
// for printing path
for(int i=1;i<=n;i++){
for(int j=i+1;j<=n;j++){
int u=i,v=j;
print(u+" ");
while(u!=v){
u=next[u][v];
print(u+" ");
}
print("\n");
}
}