I have a depth-first search algorithm whose pseudocode is given below:
DFS(Vertex v)
mark v visited
make an empty Stack S
push all vertices adjacent to v onto S
while S is not empty do
Vertex w is popped off S
for all Vertex u adjacent to w do
if u is not visited then
mark u visited
push u onto S
Now, I wish to convert the above DFS algorithm to breadth-first search. I am implementing the program in C++, but I have no idea how to go about it.
EDIT: I know the pseudocode of BFS. What I am looking for is how to convert the above DFS pseudocode to BFS.
BFS(Vertex v)
mark v visited
make an empty Queue Q
enqueue all vertices adjacent to v into Q
while Q is not empty do
Vertex w is dequeued from Q
for all Vertex u adjacent to w do
if u is not visited then
mark u visited
enqueue u into Q
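If it helps, here is a minimal C++ sketch of this BFS, assuming the graph is stored as an adjacency list (std::vector<std::vector<int>>, which is my assumption, not part of the question). Unlike the pseudocode it starts by enqueueing v itself rather than its neighbours (which is equivalent), and it marks vertices as visited when they are enqueued so nothing is added twice.

#include <iostream>
#include <queue>
#include <vector>

// BFS over an adjacency-list graph; the representation is an assumption.
void bfs(const std::vector<std::vector<int>>& adj, int v) {
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    visited[v] = true;          // mark v visited
    q.push(v);                  // enqueue v itself (equivalent to enqueueing its neighbours)
    while (!q.empty()) {        // while Q is not empty
        int w = q.front();      // Vertex w is dequeued from Q
        q.pop();
        std::cout << w << ' ';  // process w (here: print the visit order)
        for (int u : adj[w]) {  // for all Vertex u adjacent to w
            if (!visited[u]) {  // if u is not visited
                visited[u] = true;
                q.push(u);      // enqueue u into Q
            }
        }
    }
}

int main() {
    // Small example graph: edges 0-1, 0-2, 1-3, 2-3.
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
    bfs(adj, 0);                // prints: 0 1 2 3
    std::cout << '\n';
    return 0;
}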
I hope this helps
I have a question about this problem:
Given a directed graph that contains N vertices and M edges, determine whether there is a path from vertex i to vertex j for all 1 <= i, j <= N.
I want to solve for N <= 500, M <= 250000.
I came up with a naive solution that checks every pair with DFS, but its time complexity is O(N^2 M), so it is too slow.
Please tell me an efficient algorithm to solve it.
Example
For example, if this graph is given:
The answer is NO because there isn't a path from 4 to 1.
The following algorithm can be implemented with O(N+M) complexity.
Take any vertex u. Use a flood fill (DFS/BFS) following the edge directions to reach the other vertices. If any vertex is not reachable, return NOK.
Now do the same, but traverse the edges in the opposite direction. If any vertex is not reachable, return NOK.
Return OK. (Here we know that there is a path from any vertex v to u because of step 2, and a path from u to any vertex w because of step 1.)
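For what it's worth, here is a minimal C++ sketch of this approach (the function names and 0-indexed vertices are my own; your input format may differ): one DFS over the original edges, one DFS over the reversed edges, then check that both searches reached every vertex.

#include <iostream>
#include <utility>
#include <vector>

// Recursive DFS marking every vertex reachable from v.
void dfs(int v, const std::vector<std::vector<int>>& adj, std::vector<bool>& seen) {
    seen[v] = true;
    for (int u : adj[v])
        if (!seen[u]) dfs(u, adj, seen);
}

// Returns true iff every vertex can reach every other vertex. O(N + M).
bool stronglyConnected(int n, const std::vector<std::pair<int, int>>& edges) {
    std::vector<std::vector<int>> fwd(n), rev(n);
    for (auto [a, b] : edges) {
        fwd[a].push_back(b);    // edges in the original direction
        rev[b].push_back(a);    // the same edges reversed
    }
    std::vector<bool> fromU(n, false), toU(n, false);
    dfs(0, fwd, fromU);         // step 1: does u = 0 reach every vertex?
    dfs(0, rev, toU);           // step 2: does every vertex reach u = 0?
    for (int v = 0; v < n; ++v)
        if (!fromU[v] || !toU[v]) return false;   // NOK
    return true;                                  // OK
}

int main() {
    // Small 0-indexed example: 0->1->2->0 plus 2->3; vertex 3 cannot reach the
    // cycle, so the answer is false (NO).
    std::cout << std::boolalpha
              << stronglyConnected(4, {{0, 1}, {1, 2}, {2, 0}, {2, 3}}) << '\n';
    return 0;
}

(With N up to 500 the recursion depth is at most 500, so recursive DFS is fine here; for much larger graphs an iterative DFS or BFS would be safer.)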
I read the pseudocode of the Floyd-Warshall algorithm:
let dist be a |V| × |V| array of minimum distances initialized to ∞ (infinity)
for each vertex v
    dist[v][v] ← 0
for each edge (u,v)
    dist[u][v] ← w(u,v)  // the weight of the edge (u,v)
for k from 1 to |V|
    for i from 1 to |V|
        for j from 1 to |V|
            if dist[i][j] > dist[i][k] + dist[k][j]
                dist[i][j] ← dist[i][k] + dist[k][j]
            end if
But it just uses one dist matrix to save distances.
I think there should be n dist matrices, where n is the number of vertices,
or at least we need two dist matrices: one stores the current shortest paths that use only intermediate vertices up to k-1, the other stores the shortest paths that use intermediate vertices up to k; then the first one stores the shortest paths up to k+1, and so on.
How can we store the new shortest distances for step k in the same matrix that still holds the distances for step k-1?
This picture shows that we need D0, D1, D2, ..., D(n).
You're right in the sense that the original formula requires that calculations for step k use the calculations from step k-1:

dist_k[i][j] = min(dist_(k-1)[i][j], dist_(k-1)[i][k] + dist_(k-1)[k][j])
That can be organized easily if, as you say, the first matrix is used to store values from step k-1, the second is used to store values from step k, the first one is used again to store values from step k+1, and so on.
But, if we use the same matrix when updating values, in the above formula we might accidentally use dist_k[i][k] instead of dist_(k-1)[i][k] if the value for index (i,k) has already been updated during the current round k, or we might get dist_k[k][j] instead of dist_(k-1)[k][j] if the value for index (k,j) has been updated. Won't that be a violation of the algorithm, since we would be using the wrong recursive update formula?
Well, not really. Remember, the Floyd-Warshall algorithm deals with the "no negative cycles" constraint, which means that there is no cycle whose edges sum to a negative value. This means that for any k the shortest path from node k to node k is 0 (otherwise there would be a path from k to k whose edges sum to a negative value). So by definition:

dist_(k-1)[k][k] = 0

Now, let's just take the first formula and replace j with k:

dist_k[i][k] = min(dist_(k-1)[i][k], dist_(k-1)[i][k] + dist_(k-1)[k][k]) = min(dist_(k-1)[i][k], dist_(k-1)[i][k] + 0) = dist_(k-1)[i][k]

And then let's replace 'i' with 'k' in the same formula:

dist_k[k][j] = min(dist_(k-1)[k][j], dist_(k-1)[k][k] + dist_(k-1)[k][j]) = min(dist_(k-1)[k][j], 0 + dist_(k-1)[k][j]) = dist_(k-1)[k][j]

So, basically, dist_k[i][k] will have the same value as dist_(k-1)[i][k], and dist_k[k][j] will have the same value as dist_(k-1)[k][j], so it doesn't really matter whether these values have already been updated during round k, and you can update the same matrix while reading it without breaking the algorithm.
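As a concrete illustration, here is a direct C++ translation of the pseudocode that updates a single dist matrix in place (this is a sketch under my own conventions: the caller is assumed to have initialized dist with 0 on the diagonal, edge weights, and INF elsewhere, with INF kept small enough that the additions cannot overflow).

#include <limits>
#include <vector>

// "Infinity" kept at max/4 so that INF + INF never overflows a long long.
const long long INF = std::numeric_limits<long long>::max() / 4;

// In-place Floyd-Warshall: the same matrix is read and written during round k,
// which is safe because dist[i][k] and dist[k][j] cannot change in that round.
void floydWarshall(std::vector<std::vector<long long>>& dist) {
    const int n = static_cast<int>(dist.size());
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}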
You are partially correct here.
The output of the Floyd-Warshall algorithm (i.e., the N×N distance matrix) DOESN'T by itself allow you to reconstruct the actual shortest path between any two given vertices.
These paths can be recovered if we retain a parent matrix P that stores, for each vertex pair (x, y), the last intermediate vertex used on the shortest path between them. Say this value is k.
The shortest path from x to y is the concatenation of the shortest path from x to k with the shortest path from k to y, which can be reconstructed recursively given the matrix P.
Note, however, that most all-pairs applications need only the resulting distance matrix. These jobs are what Floyd's algorithm was designed for.
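A rough C++ sketch of that parent-matrix idea (the names P, reconstruct, and the -1 sentinel are my own choices, not from the answer): P[x][y] remembers the last intermediate vertex k that improved the pair (x, y), and the path is rebuilt recursively from it. The same INF convention as in the previous sketch is assumed.

#include <vector>

// Floyd-Warshall that also fills the parent matrix P.
// P[x][y] = last intermediate vertex used for the pair (x, y), or -1 if the
// shortest path is just the direct edge x -> y.
void floydWarshallWithParents(std::vector<std::vector<long long>>& dist,
                              std::vector<std::vector<int>>& P) {
    const int n = static_cast<int>(dist.size());
    P.assign(n, std::vector<int>(n, -1));
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (dist[i][k] + dist[k][j] < dist[i][j]) {
                    dist[i][j] = dist[i][k] + dist[k][j];
                    P[i][j] = k;      // remember the intermediate vertex
                }
}

// Appends to 'path' all vertices strictly between x and y on the shortest path.
void reconstruct(int x, int y, const std::vector<std::vector<int>>& P,
                 std::vector<int>& path) {
    const int k = P[x][y];
    if (k == -1) return;              // direct edge: nothing in between
    reconstruct(x, k, P, path);       // shortest path from x to k
    path.push_back(k);
    reconstruct(k, y, P, path);       // shortest path from k to y
}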
I have a degenerate tree (it looks like an array or a doubly linked list). For example, take this tree, which is just the chain A-B-C-D-E.
Each edge has some weight. I want to find all equal paths that start at each vertex.
In other words, I want to get all tuples (v1, v, v2) where v1 is an arbitrary ancestor of v, v2 is an arbitrary descendant of v, and c(v1, v) = c(v, v2), where c denotes the path cost.
Let the edges have the following weights (this is just an example):
a-b = 3
b-c = 1
c-d = 1
d-e = 1
Then:
The vertex A does not have any equal paths (there are no vertices on its left side).
The vertex B has one equal pair: the path B-A has the same length as the path B-E (3 == 3).
The vertex C has one equal pair: the path B-C has the same length as the path C-D (1 == 1).
The vertex D has one equal pair: the path C-D has the same length as the path D-E (1 == 1).
The vertex E does not have any equal paths (there are no vertices on its right side).
I implemented a simple algorithm that works in O(n^2), but it is too slow for me.
You write, in the comments, that your current approach is:

It seems I am looking for a way to decrease the constant in O(n^2). I choose some vertex. Then I create two sets. Then I fill these sets with partial sums while iterating from this vertex to the start of the tree and to the end of the tree. Then I find the set intersection and get the number of paths from this vertex. Then I repeat the algorithm for all other vertices.
There is a simpler and, I think, faster O(n^2) approach, based on the so-called two pointers method.
For each vertex v, go in both possible directions at the same time. Have one "pointer" to a vertex (vl) moving in one direction and another (vr) moving in the other direction, and try to keep the distance from v to vl as close to the distance from v to vr as possible. Each time these distances become equal, you have a pair of equal paths.
for v in vertices
vl = prev(v)
vr = next(v)
while (vl is still inside the tree)
and (vr is still inside the tree)
if dist(v,vl) < dist(v,vr)
vl = prev(vl)
else if dist(v,vr) < dist(v,vl)
vr = next(vr)
else // dist(v,vr) == dist(v,vl)
ans = ans + 1
vl = prev(vl)
vr = next(vr)
(By precalculating the prefix sums, you can find dist in O(1).)
It's easy to see that no equal pair will be missed provided that you do not have zero-length edges.
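A small C++ sketch of this two-pointer idea, assuming the chain is given as a list of edge weights between consecutive vertices (the representation and function name are my own):

#include <cstdlib>
#include <iostream>
#include <vector>

// Counts the triples (vl, v, vr) with dist(v, vl) == dist(v, vr) on a chain.
// w[i] is the weight of the edge between vertex i and vertex i + 1.
long long countEqualPairs(const std::vector<long long>& w) {
    const int n = static_cast<int>(w.size()) + 1;     // number of vertices
    std::vector<long long> pref(n, 0);                // prefix sums: pref[i] = dist(v0, vi)
    for (int i = 1; i < n; ++i) pref[i] = pref[i - 1] + w[i - 1];
    auto dist = [&](int a, int b) { return std::llabs(pref[a] - pref[b]); };

    long long ans = 0;
    for (int v = 0; v < n; ++v) {
        int vl = v - 1, vr = v + 1;                   // the two pointers
        while (vl >= 0 && vr < n) {
            if (dist(v, vl) < dist(v, vr)) --vl;
            else if (dist(v, vr) < dist(v, vl)) ++vr;
            else { ++ans; --vl; ++vr; }               // equal distances: one more pair
        }
    }
    return ans;
}

int main() {
    // The example from the question (weights a-b=3, b-c=1, c-d=1, d-e=1): prints 3.
    std::cout << countEqualPairs({3, 1, 1, 1}) << '\n';
    return 0;
}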
Regarding a faster solution: if you want to list all pairs, then you can't do better, because the number of pairs is O(n^2) in the worst case. But if you only need the number of such pairs, faster algorithms might exist.
UPD: I came up with another algorithm for calculating the count, which might be faster if your edge weights are rather small. If you denote the total length of your chain (the sum of all edge weights) as L, then the algorithm runs in O(L log L). However, it is much more advanced, both conceptually and in coding.
First, some theoretical reasoning. Consider some vertex v. Let us have two arrays, a and b, not C-style zero-indexed arrays, but arrays indexed from -L to L.
Let us define
for i>0, a[i]=1 iff there is a vertex at distance exactly i to the right of v, otherwise a[i]=0
for i=0, a[i]≡a[0]=1
for i<0, a[i]=1 iff there is a vertex at distance exactly -i to the left of v, otherwise a[i]=0
A simple way to understand this array is as follows. Stretch your graph and lay it along the coordinate axis so that each edge has a length equal to its weight and vertex v lies at the origin. Then a[i]=1 iff there is a vertex at coordinate i.
For your example and for vertex "b" chosen as v:
a--------b--c--d--e
--|--|--|--|--|--|--|--|--|-->
-4 -3 -2 -1 0 1 2 3 4
a: ... 0 1 0 0 1 1 1 1 0 ...
For another array, array b, we define the values in a symmetrical way with respect to the origin, as if we had inverted the direction of the axis:
for i>0, b[i]=1 iff there is a vertex at distance exactly i to the left of v, otherwise b[i]=0
for i=0, b[i]≡b[0]=1
for i<0, b[i]=1 iff there is a vertex at distance exactly -i to the right of v, otherwise b[i]=0
Now consider a third array c such that c[i]=a[i]*b[i], where the asterisk stands for ordinary multiplication. Obviously c[i]=1 iff the path of length abs(i) to the left ends in a vertex and the path of length abs(i) to the right ends in a vertex. So for i>0 each position in c that has c[i]=1 corresponds to a pair of equal paths you need. There are also negative positions (c[i]=1 with i<0), which just mirror the positive positions, and one more position where c[i]=1, namely position i=0.
Calculate the sum of all elements in c. This sum will be sum(c)=2P+1, where P is the number of equal path pairs that have v as their center. So if you know sum(c), you can easily determine P.
Let us now consider more closely the arrays a and b and how they change when we change the vertex v. Let us denote by v0 the leftmost vertex (the root of your tree) and by a0 and b0 the corresponding a and b arrays for that vertex.
For arbitrary vertex v denote d=dist(v0,v). Then it is easy to see that for vertex v the arrays a and b are just arrays a0 and b0 shifted by d:
a[i]=a0[i+d]
b[i]=b0[i-d]
It is obvious if you remember the picture with the tree stretched along a coordinate axis.
Now let us consider one more array, S (one array for all vertices), and for each vertex v let us put the value of sum(c) into the S[d] element (d and c depend on v).
More precisely, let us define array S so that for each d
S[d] = sum_over_i(a0[i+d]*b0[i-d])
Once we know the S array, we can iterate over vertices and for each vertex v obtain its sum(c) simply as S[d] with d=dist(v,v0), because for each vertex v we have sum(c)=sum(a0[i+d]*b0[i-d]).
But the formula for S is very simple: S is just the convolution of the a0 and b0 sequences. (The formula does not exactly follow the definition, but is easy to modify to the exact definition form.)
So what we now need is given a0 and b0 (which we can calculate in O(L) time and space), calculate the S array. After this, we can iterate over S array and simply extract the numbers of paths from S[d]=2P+1.
Direct application of the formula above is O(L^2). However, the convolution of two sequences can be calculated in O(L log L) by applying the Fast Fourier transform algorithm. Moreover, you can apply a similar Number theoretic transform (don't know whether there is a better link) to work with integers only and avoid precision problems.
So the general outline of the algorithm becomes
calculate a0 and b0 // O(L)
calculate S = corrected_convolution(a0, b0) // O(L log L)
v0 = leftmost vertex (root)
for v in vertices:
d = dist(v0, v)
ans = ans + (S[d]-1)/2
(I call it corrected_convolution because S is not exactly a convolution, but a very similar object for which a similar algorithm can be applied. Moreover, you can even define S'[2*d]=S[d]=sum(a0[i+d]*b0[i-d])=sum(a0[i]*b0[i-2*d]), and then S' is the convolution proper.)
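Here is a rough C++ sketch of this outline (the names are mine). It builds a0 and exploits the fact that, for the leftmost vertex, b0 is the mirror image of a0 (b0[i]=a0[-i]), so the S'[2d] from the parenthetical above is just the auto-convolution of a0 evaluated at index 2d. For brevity it uses a naive O(L^2) convolution as a placeholder; to get the O(L log L) bound, replace corrected_convolution with an FFT- or NTT-based convolution.

#include <iostream>
#include <vector>

// Naive O(L^2) auto-convolution of a0; swap in FFT/NTT for O(L log L).
std::vector<long long> corrected_convolution(const std::vector<long long>& a0) {
    const long long L = static_cast<long long>(a0.size()) - 1;
    std::vector<long long> S(2 * L + 1, 0);
    for (long long j = 0; j <= L; ++j)
        if (a0[j])
            for (long long k = 0; k <= L; ++k)
                if (a0[k]) S[j + k] += 1;             // S'[j + k] += a0[j] * a0[k]
    return S;
}

// Counts equal path pairs on a chain given the edge weights, following the
// outline above: a0[i] = 1 iff some vertex lies at coordinate i from the root.
long long countEqualPairsViaConvolution(const std::vector<long long>& w) {
    std::vector<long long> coord(w.size() + 1, 0);    // coordinate of each vertex
    for (size_t i = 0; i < w.size(); ++i) coord[i + 1] = coord[i] + w[i];
    const long long L = coord.back();                 // total chain length

    std::vector<long long> a0(L + 1, 0);
    for (long long c : coord) a0[c] = 1;

    std::vector<long long> S = corrected_convolution(a0);
    long long ans = 0;
    for (long long d : coord)                          // d = dist(v0, v)
        ans += (S[2 * d] - 1) / 2;                     // sum(c) = 2P + 1
    return ans;
}

int main() {
    // The example chain (3, 1, 1, 1) from the question: prints 3.
    std::cout << countEqualPairsViaConvolution({3, 1, 1, 1}) << '\n';
    return 0;
}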
So I was given this pseudocode for Prim's algorithm:
INPUT: GRAPH G = (V,E)
OUTPUT: Minimum spanning tree of G
Select arbitrary vertex s that exists within V
Construct an empty tree mst
Construct an empty priority queue Q that contains nodes ordered by their “distance” from mst
Insert s into Q with priority 0
while there exists a vertex v such that v exists in V and v does not exist in mst do
let v = Q.findMin()
Q.removeMin()
for vertex u that exists in neighbors(v) do
if v does not exist in mst then
if weight(u, v) < Q.getPriority(u) then
//TODO: What goes here?
end if
end if
end for
end while
return mst
What goes in the //TODO?
The TODO is:
Q.setPriority(u, weight(u, v));
Besides, your queue won't work correctly as written: the priority of every node except s should be initialized to ∞.
As pseudocode, I have rewritten it below:
MST-PRIM(G, w, s)
    for each u in G.V
        u.priority = ∞
        u.p = NULL          // u's parent in the MST
    s.priority = 0
    Q = G.V                 // Q is a priority queue
    while Q ≠ ∅
        u = EXTRACT-MIN(Q)
        for each v in u's adjacent vertices
            if v ∈ Q and w(u,v) < v.priority
                v.p = u
                v.priority = w(u,v)
You can find the original version in Chapter 23.2 of Introduction to Algorithms.
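If it helps, here is a hedged C++ sketch of Prim's algorithm using std::priority_queue (the graph representation and names are my own assumptions). std::priority_queue has no setPriority/decrease-key operation, so the usual workaround ("lazy deletion") is to push a fresh (weight, vertex) entry whenever a better connecting edge is found and to skip stale entries for vertices already in the MST. For brevity it returns only the total MST weight and assumes a connected graph.

#include <functional>
#include <iostream>
#include <queue>
#include <utility>
#include <vector>

// adj[v] holds pairs (neighbour, edge weight).
long long primMstWeight(const std::vector<std::vector<std::pair<int, long long>>>& adj) {
    const int n = static_cast<int>(adj.size());
    std::vector<bool> inMst(n, false);
    using Entry = std::pair<long long, int>;       // (priority, vertex)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> q;
    q.push({0, 0});                                // arbitrary start vertex s = 0, priority 0
    long long total = 0;
    while (!q.empty()) {
        auto [w, v] = q.top();
        q.pop();
        if (inMst[v]) continue;                    // stale entry, skip it
        inMst[v] = true;                           // v joins the mst
        total += w;
        for (auto [u, weight] : adj[v])
            if (!inMst[u]) q.push({weight, u});    // "setPriority" via a new entry
    }
    return total;
}

int main() {
    // Small example: triangle 0-1 (1), 1-2 (2), 0-2 (3); MST weight is 3.
    std::vector<std::vector<std::pair<int, long long>>> adj(3);
    auto addEdge = [&](int a, int b, long long w) {
        adj[a].push_back({b, w});
        adj[b].push_back({a, w});
    };
    addEdge(0, 1, 1); addEdge(1, 2, 2); addEdge(0, 2, 3);
    std::cout << primMstWeight(adj) << '\n';       // prints 3
    return 0;
}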
I have a simple matrix (a matrix which represents a terrain map in a 2D game and contains ASCII characters, for example 'm' for mountain, 'v' for valley, 'r' for river), and the map may contain one river or none. The river can flow from any position in the matrix to any other, and it always separates the map into two distinct parts: there is no source of the river on the map, it always enters at one edge and exits at another. How do I separate the matrix/terrain map into two clusters when a river is present?
example terrain
v v v v v v v v r v v v v v
v v v v v m m m r m m m m m
v v v v v m m r r m m m m m
m m v m m m m r r m m m v v
v v v v v v r r v v v v v v
Here I should get a left cluster and a right cluster of the coordinates that are not river.
You should try looking up the flood fill algorithm.
http://en.wikipedia.org/wiki/Flood_fill
Basically you want to pick a point that is not in the river and start the flood fill algorithm there, which will give you the set of points connected to the starting point. Now you have one part, and finding the other one is pretty easy from there.
Your map induces a graph:
there's one vertex for each map cell
two vertices are connected if they are adjacent and none of them is an 'r'
Once the graph is constructed, you can run a graph traversal algorithm like breadth-first search (BFS) or depth-first search (DFS) to find the connected components of the graph.
I'd recommend using BFS, because if the map is large then DFS might get you into a stack overflow (if its recursive implementation is used).
You'll want to run the BFS only on non-'r' nodes, so that in the end you'll end up with two connected components.
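A rough C++ sketch of that BFS labelling, assuming the map is stored as a vector of strings without the spaces shown in the example (the function and variable names are mine): every non-'r' cell is labelled with the index of its connected component, so with a separating river you end up with labels 0 and 1.

#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Labels each non-'r' cell with its connected component index (4-connectivity).
// River cells keep the label -1.
std::vector<std::vector<int>> labelParts(const std::vector<std::string>& map) {
    const int rows = static_cast<int>(map.size());
    const int cols = static_cast<int>(map[0].size());
    std::vector<std::vector<int>> label(rows, std::vector<int>(cols, -1));
    const int dr[] = {-1, 1, 0, 0}, dc[] = {0, 0, -1, 1};
    int part = 0;
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            if (map[r][c] == 'r' || label[r][c] != -1) continue;
            // BFS flood fill from (r, c) over non-river cells.
            std::queue<std::pair<int, int>> q;
            q.push({r, c});
            label[r][c] = part;
            while (!q.empty()) {
                auto [cr, cc] = q.front();
                q.pop();
                for (int d = 0; d < 4; ++d) {
                    int nr = cr + dr[d], nc = cc + dc[d];
                    if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                    if (map[nr][nc] == 'r' || label[nr][nc] != -1) continue;
                    label[nr][nc] = part;
                    q.push({nr, nc});
                }
            }
            ++part;                      // next component gets the next label
        }
    return label;                        // with a river: labels 0 (one side) and 1 (the other)
}

int main() {
    // Tiny example map with a separating river.
    std::vector<std::string> map = {"vvrvv", "vmrmm", "vrrvv"};
    auto label = labelParts(map);
    std::cout << label[0][0] << ' ' << label[0][4] << '\n';  // prints: 0 1
    return 0;
}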