When adding a vertex to a weighted undirected graph, which weight stays? - c++

I'm doing an adjacency list implementation of a graph class in C++ (but that's kinda irrelevant). I have a weighted directionless graph and I am writing the addVertex method. I have basically everything down, but I'm not sure what I should do when I add a vertex in between two others that were already there and had their own corresponding weights.
Should I just throw away the old weight that the vertex stored? Should I use the new one that was passed in? A mix of both? Or does it not matter at all what I pick?
I just wanted to make sure that I was not doing something I shouldn't.
Thanks!

I guess it depends on what you want to achieve. Usually, an adjacency list is a nested list in which row i holds the i-th node's neighbourhood; each entry in that row represents a connection from node i to some node j. A plain adjacency list does not store edge or arc weights itself (a weighted variant stores (neighbour, weight) pairs).
Hence, adding a vertex n should not affect the existing entries of the adjacency list; it merely adds a new, empty row n. Adding or removing edges, on the other hand, does alter the list's entries. Thus, adding a vertex n "between two other [nodes i and j] that were already there and had their own corresponding weights" implies that you remove the existing connection between i and j and add two new connections (i,n) and (n,j). If there are no capacity restrictions on the edges and the sum of the distances (i,n) and (n,j) dominates the distance (i,j), discarding the old edge is fine. However, if the weights represent capacities (e.g. in a max-flow problem), you should keep both connections.
So your question seems incomplete, or at least imprecise. I assume your goal is to calculate the shortest distances between each pair of nodes in an undirected graph. I suggest keeping all the different connections in your graph; a shortest-path algorithm can compute the shortest connection between each node pair once the graph has been built.
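If you do decide to split the old edge (i,j) into (i,n) and (n,j), the operation might look like the sketch below. `Graph`, `splitEdge` and the weight parameters are illustrative names, not from the question; how the old weight is divided between the two new edges remains the caller's modelling decision.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Minimal weighted undirected graph as an adjacency list.
struct Graph {
    // adj[u] holds (neighbour, weight) pairs.
    std::vector<std::vector<std::pair<int, double>>> adj;

    int addVertex() {
        adj.emplace_back();
        return static_cast<int>(adj.size()) - 1;
    }

    void addEdge(int u, int v, double w) {
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});   // undirected: store both directions
    }

    void removeEdge(int u, int v) {
        auto erase = [this](int a, int b) {
            auto& row = adj[a];
            for (std::size_t k = 0; k < row.size(); ++k)
                if (row[k].first == b) { row.erase(row.begin() + k); return; }
        };
        erase(u, v);
        erase(v, u);
    }

    // Insert a new vertex n "between" i and j: the old edge (i,j) goes
    // away and is replaced by (i,n) and (n,j).  How the old weight is
    // split between wIn and wOut is a modelling decision.
    int splitEdge(int i, int j, double wIn, double wOut) {
        removeEdge(i, j);
        int n = addVertex();
        addEdge(i, n, wIn);
        addEdge(n, j, wOut);
        return n;
    }
};
```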

Related

The number of undirected graphs represented in the same matrix

I have many undirected graphs in the same adjacency matrix. I need an idea to find the number of graphs from the matrix.
I have to do this in C/C++.
This means that you have many unconnected components in the matrix (i.e. graphs that do not connect to each other).
Take a random node and follow its edges, marking each node visited. When you cannot continue, you have found one component.
Take a random node that has not yet been visited and do the same.
Repeat until all nodes are marked.
For the marking, take an integer that you increment with each component marked. That allows you to identify (and list) the components and at the same time tell how many components there are.
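The steps above can be sketched as follows (`countComponents` and `comp` are illustrative names); here the integer mark doubles as both the visited flag and the component id:

```cpp
#include <cassert>
#include <vector>

// Count connected components of an undirected graph given as an
// adjacency list, labelling each vertex with its component id.
// Returns the number of components; comp[v] tells which one v is in.
int countComponents(const std::vector<std::vector<int>>& adj,
                    std::vector<int>& comp) {
    int n = (int)adj.size();
    comp.assign(n, -1);                // -1 means "not visited yet"
    int id = 0;
    std::vector<int> stack;
    for (int s = 0; s < n; ++s) {
        if (comp[s] != -1) continue;   // already in some component
        stack.push_back(s);
        comp[s] = id;
        while (!stack.empty()) {       // iterative DFS over one component
            int u = stack.back(); stack.pop_back();
            for (int v : adj[u])
                if (comp[v] == -1) { comp[v] = id; stack.push_back(v); }
        }
        ++id;                          // one more component finished
    }
    return id;
}
```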

Find depth of all nodes in a graph stored in the form of an adjacency list in a single BFS/DFS run

I'm new to graph theory and have been solving a few problems lately.
I was looking for a way to find the depth of every node in the graph. Here's how I formalized it on paper:
Run a for loop iterating over every vertex
If the vertex is not visited, run a DFS with that vertex as a source
In the DFS, as long as we have more vertices to go to, we keep going (as in the Depth First Search) and we keep a counter, cnt1, which increments every time
While backtracking in the recursive DFS call, we initialize a new counter cnt2 starting from the current count at the last vertex, give the value cnt1-cnt2 to each vertex, and so on, decreasing cnt2 as we go.
I'm not sure if this is correct and am not able to implement the same in code. Any suggestions? For the record, my graph is stored in the form of an adjacency list, in the form of an array of vectors like:
vector<int> a[100];
EDIT: The input graph is a collection of directed trees. We have a depth label for each node, denoting the number of nodes on the simple path from the root to it. Hence we need to find the maximum depth of each node.
You may find these links helpful:
http://www.geeksforgeeks.org/breadth-first-traversal-for-a-graph/
http://www.geeksforgeeks.org/depth-first-traversal-for-a-graph/
Both use classes to implement BFS/DFS on an adjacency-list representation. Just like the bool array 'visited' used there, you can create another array 'depth' in which you store the depth of each element during the computation, and then output that array at the end.
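A minimal sketch of that idea, assuming the forest of directed trees from the question's edit (`computeDepths` is an illustrative name; roots are detected as nodes with no incoming edge, and a root has depth 1, matching the question's node-counting definition):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Compute the depth of every node in a forest of rooted trees stored as
// an adjacency list of child edges, in a single BFS over all roots.
std::vector<int> computeDepths(const std::vector<std::vector<int>>& children) {
    int n = (int)children.size();
    std::vector<int> indeg(n, 0), depth(n, 0);
    for (int u = 0; u < n; ++u)
        for (int v : children[u]) ++indeg[v];

    std::queue<int> q;
    for (int u = 0; u < n; ++u)
        if (indeg[u] == 0) { depth[u] = 1; q.push(u); }  // roots

    while (!q.empty()) {                 // one BFS covers the whole forest
        int u = q.front(); q.pop();
        for (int v : children[u]) { depth[v] = depth[u] + 1; q.push(v); }
    }
    return depth;
}
```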

Given a set of vertices, how do you generate a strongly-connected directed graph with a near-minimal amount of edges?

I am trying to perform testing on my graph class's Dijkstra's algorithm. To do this, I generate a graph with a couple thousand vertices, and then make the graph connected by randomly adding thousands of edges until the graph is connected. I can then run the search between any two random vertices over and over and be sure that there is a path between them. The problem is, I often end up with a nearly dense graph, which, because I am using an adjacency-list representation, causes my search algorithm to be terribly slow.
Question :
Given a set of vertices V, how do you generate a strongly-connected directed graph that has significantly fewer edges than a dense graph over the same vertices would have?
I was thinking about simply doing the following :
vertex 1 <--> vertex 2, vertex 2 <--> vertex 3, ..., vertex n-1 <--> vertex n
And then randomly adding like n/10 edges throughout the graph, but this doesn't seem like an optimal way of coming up with random graph structures to test my search algorithms on.
One approach would be to maintain a set of strongly connected components (starting with |V| single-vertex components), and in each iteration, merge some random subset of them into a single connected component by connecting a random vertex of each one to a random vertex of the next one, forming a cycle.
This will tend to generate very sparse graphs, so depending on your use case, you might want to toss in some extra random edges as well.
EDIT: Intuitively I think you'd want to use an exponential distribution when deciding how many components to merge in a single iteration. I don't have any real support for that, though.
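A sketch of that component-merging idea (all names are illustrative, and merge sizes are drawn uniformly rather than exponentially). The invariant is that every component in `comps` is strongly connected, so adding a directed cycle through one random vertex of each merged component makes their union strongly connected too:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// Generate a sparse strongly-connected digraph on n vertices by
// repeatedly merging random subsets of components via directed cycles.
std::vector<std::pair<int, int>> sparseStronglyConnected(int n, std::mt19937& rng) {
    std::vector<std::vector<int>> comps(n);     // start: n singletons
    for (int v = 0; v < n; ++v) comps[v] = {v};
    std::vector<std::pair<int, int>> edges;

    while (comps.size() > 1) {
        std::uniform_int_distribution<std::size_t> cnt(2, comps.size());
        std::size_t k = cnt(rng);               // merge k components this round
        std::shuffle(comps.begin(), comps.end(), rng);

        std::vector<int> merged;
        for (std::size_t i = 0; i < k; ++i) {
            std::uniform_int_distribution<std::size_t> pa(0, comps[i].size() - 1);
            std::uniform_int_distribution<std::size_t> pb(0, comps[(i + 1) % k].size() - 1);
            // directed edge from a random vertex of component i to a
            // random vertex of the next one, closing a cycle over all k
            edges.push_back({comps[i][pa(rng)], comps[(i + 1) % k][pb(rng)]});
            merged.insert(merged.end(), comps[i].begin(), comps[i].end());
        }
        comps.erase(comps.begin(), comps.begin() + k);
        comps.push_back(std::move(merged));
    }
    return edges;
}
```

The edge count comes out near |V| plus one edge per merge round, which is far below the |V|(|V|-1) of a dense digraph.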
I don't know if there is a better way of doing it, but at least this seems to work:
I would add E (directed) edges between random vertices. This will generate several clusters of vertices.
Then, I need to connect those clusters to form a chain of clusters, ensuring that from any cluster I can reach any other cluster. For this I can label a random vertex of each cluster as the "master" vertex and join the master vertices in a loop, with the last master connected back to the first. Thus you have a strongly-connected directed graph composed of clusters (not of vertices yet).
Now, in order to turn it into a strongly-connected digraph composed of vertices I need to make each cluster a strongly-connected digraph by itself. But this is easy if I run a DFS starting at the master node of a cluster and each time I find a leaf I add an edge from that leaf to its master vertex. Note that the DFS must not traverse outside of the cluster.
I think this may work, though the topology will not be truly random: it will look like a big loop composed of smaller graphs joined together. But depending on the algorithm you need to test, this may come in handy.
EDIT:
If after that you want to have a more random topology, you can add random edges between vertices of different clusters. That doesn't invalidate the rules and creates more complex paths for your algorithm to traverse.
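Whichever generator you use, the result is cheap to verify: a digraph is strongly connected exactly when every vertex is reachable from one fixed vertex both in the graph and in its edge-reversed copy. A minimal checker (`isStronglyConnected` is an illustrative name):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Strong connectivity test: reachability of all n vertices from vertex
// 0 in both the forward and the reversed graph.
bool isStronglyConnected(int n, const std::vector<std::pair<int, int>>& edges) {
    if (n <= 1) return true;
    std::vector<std::vector<int>> fwd(n), rev(n);
    for (const auto& e : edges) {
        fwd[e.first].push_back(e.second);
        rev[e.second].push_back(e.first);
    }
    auto allReachable = [n](const std::vector<std::vector<int>>& g) {
        std::vector<char> vis(n, 0);
        std::vector<int> st = {0};
        vis[0] = 1;
        while (!st.empty()) {               // plain DFS from vertex 0
            int u = st.back(); st.pop_back();
            for (int v : g[u])
                if (!vis[v]) { vis[v] = 1; st.push_back(v); }
        }
        for (char c : vis) if (!c) return false;
        return true;
    };
    return allReachable(fwd) && allReachable(rev);
}
```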

Graph theory: Breadth First Search

There are n vertices connected by m edges. Some of the vertices are special and the others are ordinary. There is at most one path between any two vertices.
First Query:
I need to find out how many pairs of special vertices exist which are connected directly or indirectly.
My approach:
I'll apply BFS (via a queue) to see which nodes are connected to each other. Let the number of special vertices I discover this way be k; then the answer to my query is kC2. I'll repeat this until all vertices are visited.
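That approach for the first query can be sketched as follows (names are illustrative; an iterative DFS stands in for the BFS, since any traversal works for flood-filling components):

```cpp
#include <cassert>
#include <vector>

// Flood-fill each component once, count the special vertices k in it,
// and add k*(k-1)/2 pairs to the answer.
long long countSpecialPairs(const std::vector<std::vector<int>>& adj,
                            const std::vector<bool>& special) {
    int n = (int)adj.size();
    std::vector<bool> vis(n, false);
    long long pairs = 0;
    for (int s = 0; s < n; ++s) {
        if (vis[s]) continue;
        long long k = 0;                 // specials in this component
        std::vector<int> st = {s};
        vis[s] = true;
        while (!st.empty()) {
            int u = st.back(); st.pop_back();
            if (special[u]) ++k;
            for (int v : adj[u])
                if (!vis[v]) { vis[v] = true; st.push_back(v); }
        }
        pairs += k * (k - 1) / 2;        // kC2 pairs within the component
    }
    return pairs;
}
```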
Second Query:
How many vertices lie on a path between any two special vertices?
My approach:
In my approach for query 1, I'll apply BFS to find out path between any two special vertices and then backtrack and mark the vertices lying on the path.
Problem:
Number of vertices can be as high as 50,000, so applying BFS and then, I guess, backtracking would be too slow for my time constraint (2 seconds).
I have a list of all vertices and their adjacency lists. While pushing vertices onto my queue during BFS, can I somehow calculate the answer to query 2 as well? Is there a better approach to solve the problem? The input format is such that I'm told whether each vertex is special or not, one by one, and then I'm given info about the i-th pathway which connects two vertices. There is at most one path to move from one vertex to another.
The first query is solved by splitting your forest in trees.
Starting with the full set of vertices, pick one, then visit every node you can reach from there, until you cannot visit any more vertices. This is one tree. Repeat for each tree.
You now have K bags of vertices, each containing some number (possibly zero) of special ones. That answers the first question.
For the second question, I suppose a trivial solution is indeed to BFS the path between each pair of special vertices within their sub-graph.
You could also take advantage of the tree nature of your sub-graph. This question: How to find the shortest simple path in a Tree in a linear time? mentions it. (I have not really dug into this yet, though)
For the first query, one round of BFS and some simple calculation as you have described is optimal.
For the second query, assuming the worst case where all vertices are special and the graph is a tree, doing a BFS per query gives O(Q|V|) complexity, where Q is the number of queries. You are going to run into trouble if Q and |V| are both larger than 10^4.
In the worst case, we are basically solving the all-pairs shortest path problem, but on a tree/forest. When |V| is small, we can do a BFS from every node, which gives an O(|V|^2) algorithm. However, there is a faster algorithm:
Read all second type queries and store all the pairs in a set S.
For each tree in the forest:
Choose a root node for the current tree. Calculate the distance from the root node to every other node in the current tree (regardless of whether it is special or not).
Calculate the lowest common ancestor (LCA) for all pairs of nodes being queried (which are stored in set S). This can be done with Tarjan's offline LCA algorithm.
Calculate the distance between a pair of nodes a and b as: dist(root, a) + dist(root, b) - 2 * dist(root, lca(a, b))
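A minimal sketch of that distance formula, using a naive walk-up LCA for clarity (Tarjan's offline algorithm, or binary lifting, would replace it when there are many queries; `TreeDist` is an illustrative name):

```cpp
#include <cassert>
#include <vector>

// Rooted tree given as a parent array (parent[root] == -1).
// dist(a, b) = depth(a) + depth(b) - 2 * depth(lca(a, b)).
struct TreeDist {
    std::vector<int> parent, depth;

    TreeDist(const std::vector<int>& par) : parent(par), depth(par.size(), 0) {
        for (std::size_t v = 0; v < par.size(); ++v) {
            // depth by walking to the root -- fine for a sketch
            int u = (int)v, d = 0;
            while (parent[u] != -1) { u = parent[u]; ++d; }
            depth[v] = d;
        }
    }

    int lca(int a, int b) const {
        while (depth[a] > depth[b]) a = parent[a];  // level the deeper node
        while (depth[b] > depth[a]) b = parent[b];
        while (a != b) { a = parent[a]; b = parent[b]; }
        return a;
    }

    int dist(int a, int b) const {
        return depth[a] + depth[b] - 2 * depth[lca(a, b)];
    }
};
```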
Let arr be a bool array where arr[i] is 1 if vertex i is special and 0 otherwise.
find-set(i) returns the root node of the tree containing i, so any nodes lying in the same tree return the same number.
for(int i = 1; i < n; i++){
    for(int j = i + 1; j <= n; j++){
        if(arr[i] == 1 && arr[j] == 1){        // if both are special
            if(find-set(i) == find-set(j)){    // and both i and j belong to the same tree
                // k++ where k is the answer to the first query
                // bfs(i, j) and find the intermediate vertices, setting ver[v] = 1
                // for each intermediate vertex/node v
            }
        }
    }
}
Finally, count the number of 1's in the ver array, which is the answer to the second query.

Efficiently expand set of graph edges

I have a set of edges from a graph, and would like to expand it with all edges that share a vertex with any edge. How could I do this efficiently with boost::graphs?
The only way I've been able to come up with is the naive solution of extracting all the source and target vertices, using boost::adjacent_vertices to get all adjacencies and then creating all the new edges with boost::edge. Is there a better way of doing this?
Context: The graph vertices are centroids of a terrain triangulation, the edges connect vertices whose corresponding triangles are adjacent (so sort of a dual graph). The set of edges I'm looking to expand corresponds to inter-triangle paths which are blocked, and the blocked area is expanding. The area is sort-of circular, so most of the edges I'll see using my naive approach above will already be part of the set.
Instead of considering all adjacent vertices in each step to generate the new edges, use a property map to mark what has already been encountered: a vertex is marked after all edges incident to it have been added to your set, and only edges of unmarked vertices need to be considered in each step.
Given the fact that the internal data structure used by boost::graph is either an adjacency list or an adjacency matrix, I do not think that any further improvement is possible.
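Library specifics aside, the marking idea can be sketched with plain STL containers; in a BGL implementation, the `done` array below would be a vertex property map, and all names here are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;   // stored with the smaller endpoint first

// Expand an edge set by all edges sharing a vertex with it, marking
// vertices whose incident edges have already been added so that
// repeated expansions skip them.
void expand(const std::vector<std::vector<int>>& adj,
            std::set<Edge>& edges, std::vector<bool>& done) {
    std::vector<int> frontier;
    for (const Edge& e : edges) {    // endpoints not yet fully expanded
        if (!done[e.first]) frontier.push_back(e.first);
        if (!done[e.second]) frontier.push_back(e.second);
    }
    for (int u : frontier) {
        if (done[u]) continue;
        done[u] = true;              // never expand u again
        for (int v : adj[u])
            edges.insert({std::min(u, v), std::max(u, v)});
    }
}
```

Each vertex is expanded exactly once over the whole sequence of expansions, which is precisely what saves the repeated work on the mostly-interior edges of a circular blocked area.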