I am solving a problem on directed acyclic graphs, but I am having trouble finding directed acyclic graphs to test my code on. The test graphs should be large and (obviously) acyclic.
I have tried many times to write code that generates random directed acyclic graphs, but I failed every time.
Is there an existing method for generating random directed acyclic graphs that I could use?
I cooked up a C program that does this. The key is to 'rank' the nodes and only draw edges from lower-ranked nodes to higher-ranked ones.
The program I wrote prints in the DOT language.
Here is the code itself, with comments explaining what it does:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MIN_PER_RANK 1 /* Nodes/Rank: How 'fat' the DAG should be. */
#define MAX_PER_RANK 5
#define MIN_RANKS 3    /* Ranks: How 'tall' the DAG should be. */
#define MAX_RANKS 5
#define PERCENT 30     /* Chance of having an Edge. */

int main (void)
{
  int i, j, k, nodes = 0;
  srand (time (NULL));

  int ranks = MIN_RANKS
              + (rand () % (MAX_RANKS - MIN_RANKS + 1));

  printf ("digraph {\n");
  for (i = 0; i < ranks; i++)
    {
      /* New nodes of 'higher' rank than all nodes generated till now. */
      int new_nodes = MIN_PER_RANK
                      + (rand () % (MAX_PER_RANK - MIN_PER_RANK + 1));

      /* Edges from old nodes ('nodes') to new ones ('new_nodes'). */
      for (j = 0; j < nodes; j++)
        for (k = 0; k < new_nodes; k++)
          if ( (rand () % 100) < PERCENT)
            printf (" %d -> %d;\n", j, k + nodes); /* An Edge. */

      nodes += new_nodes; /* Accumulate into old node set. */
    }
  printf ("}\n");
  return 0;
}
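Assuming Graphviz is installed, the output can be rendered directly, e.g. compile with gcc dag.c -o dag and then run ./dag | dot -Tpng -o dag.png.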
And here is the graph generated from a test run:
The answer to https://mathematica.stackexchange.com/questions/608/how-to-generate-random-directed-acyclic-graphs applies: if you have an adjacency matrix representation of the edges of your graph, and the matrix is lower triangular, then it is necessarily a DAG.
A similar approach would be to take an arbitrary ordering of your nodes, and then consider edges from node x to y only when x < y. That constraint should also get you DAG-ness by construction. Memory comparison would be one arbitrary way to order your nodes if you're using structs to represent them.
Basically, the pseudocode would be something like:
for (i = 0; i < N; i++) {
    for (j = i+1; j < N; j++) {
        maybePutAnEdgeBetween(i, j);
    }
}
where N is the number of nodes in your graph.
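As a concrete illustration of this ordering trick, here is a minimal Python sketch (my own, with an assumed edge-probability parameter p) that flips a biased coin for every pair i < j:

import random

def random_dag(n, p=0.3):
    # Every edge goes from a lower index to a higher index,
    # so the result is acyclic by construction.
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if random.random() < p]

print(random_dag(5))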
The pseudocode suggests that the number of potential DAGs, given N nodes, is
2^(N*(N-1)/2),
since there are
N*(N-1)/2
unordered pairs ("N choose 2"), and for each pair we can choose either to have an edge between the two nodes or not. For example, N = 4 gives 2^6 = 64 such DAGs.
So, to try to put all these reasonable answers together:
(In the following, I used V for the number of vertices in the generated graph, and E for the number of edges, and we assume that E ≤ V(V-1)/2.)
Personally, I think the most useful answer is in a comment, by Flavius, who points at the code at http://condor.depaul.edu/rjohnson/source/graph_ge.c. That code is really simple, and it's conveniently described by a comment, which I reproduce:
To generate a directed acyclic graph, we first
generate a random permutation dag[0],...,dag[v-1].
(v = number of vertices.)
This random permutation serves as a topological
sort of the graph. We then generate random edges of the
form (dag[i],dag[j]) with i < j.
In fact, what the code does is generate the requested number of edges by repeatedly doing the following:
generate two numbers in the range [0, V);
reject them if they're equal;
swap them if the first is larger;
reject them if it has generated them before.
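A minimal Python sketch of that rejection loop (my paraphrase of the description above, not the linked C code itself; it assumes e <= v*(v-1)/2):

import random

def random_dag_rejection(v, e):
    perm = list(range(v))
    random.shuffle(perm)                  # a random topological order
    edges = set()
    while len(edges) < e:                 # requires e <= v*(v-1)/2
        a, b = random.randrange(v), random.randrange(v)
        if a == b:
            continue                      # reject equal endpoints
        if a > b:
            a, b = b, a                   # swap so the edge follows the order
        edges.add((a, b))                 # the set silently rejects duplicates
    return [(perm[a], perm[b]) for a, b in edges]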
The problem with this solution is that as E gets close to the maximum number of edges V(V-1)/2, the algorithm becomes slower and slower, because it has to reject more and more candidate edges. A better solution would be to make a vector of all V(V-1)/2 possible edges, randomly shuffle it, and select the first E edges in the shuffled list.
The reservoir sampling algorithm lets us do this in O(E) space, since we can deduce the endpoints of the kth edge from the value of k; consequently, we don't actually have to create the source vector. However, it still requires O(V²) time.
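For instance, decoding an edge index k into its endpoints (the same triangular-number trick used in the C++ code further down) might look like this in Python:

from math import isqrt

def edge_from_index(k):
    # tail is the largest t with t*(t-1)/2 <= k; then 0 <= head < tail
    tail = (1 + isqrt(1 + 8 * k)) // 2
    head = k - tail * (tail - 1) // 2
    return head, tail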
Alternatively, one can do a Fisher-Yates shuffle (or Knuth shuffle, if you prefer), stopping after E iterations. In the version of the FY shuffle presented in Wikipedia, this will produce the trailing entries, but the algorithm works just as well backwards:
// At the end of this snippet, a consists of a random sample of the
// integers in the half-open range [0, V(V-1)/2). (They still need to be
// converted to pairs of endpoints.)
vector<int> a;
int N = V * (V - 1) / 2;
for (int i = 0; i < N; ++i) a.push_back(i);
for (int i = 0; i < E; ++i) {
  int j = i + rand(N - i);  // rand(k): uniform integer in [0, k)
  swap(a[i], a[j]);
}
a.resize(E);
This requires only O(E) time, but it requires O(N) space, i.e. O(V²). In fact, this can be improved to O(E) space with some trickery, but an SO code snippet is too small to contain the result, so I'll provide a simpler one in O(E) space and O(E log E) time. I assume that there is a class DAG with at least:
class DAG {
  // Construct an empty DAG with v vertices
  explicit DAG(int v);

  // Add the directed edge i->j, where 0 <= i, j < v
  void add(int i, int j);
};
Now here goes:
// Return a randomly-constructed DAG with V vertices and E edges.
// It's required that 0 < E < V(V-1)/2.
template<typename PRNG>
DAG RandomDAG(int V, int E, PRNG& prng) {
  using dist = std::uniform_int_distribution<int>;
  // Make a random sample of size E
  std::vector<int> sample;
  sample.reserve(E);
  int N = V * (V - 1) / 2;
  dist d(0, N - E); // uniform_int_distribution is closed range
  // Random vector of integers in [0, N-E]
  for (int i = 0; i < E; ++i) sample.push_back(d(prng));
  // Sort them, and make them unique
  std::sort(sample.begin(), sample.end());
  for (int i = 1; i < E; ++i) sample[i] += i;
  // Now it's a unique sorted list of integers in [0, N-E+E-1]
  // Randomly shuffle the endpoints, so the topological sort
  // is different, too.
  std::vector<int> endpoints;
  endpoints.reserve(V);
  for (int i = 0; i < V; ++i) endpoints.push_back(i);
  std::shuffle(endpoints.begin(), endpoints.end(), prng);
  // Finally, create the dag
  DAG rv(V);
  for (auto& v : sample) {
    // Decode the edge index v into endpoints with head < tail
    int tail = int(0.5 + sqrt((v + 1) * 2));
    int head = v - tail * (tail - 1) / 2;
    rv.add(endpoints[head], endpoints[tail]);
  }
  return rv;
}
You could generate a random directed graph, and then do a depth-first search for cycles. When you find a cycle, break it by deleting an edge.
I think this is worst case O(VE): each DFS takes O(V), and each one removes at least one edge (so there are at most E passes).
If you generate the directed graph by selecting each of the V² possible edges uniformly at random, and you DFS in random order and delete a random edge, this would give you a uniform distribution (or at least something close to it) over all possible DAGs.
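A rough Python sketch of that repair loop (an illustration of the idea, not tested reference code; it uses recursion, so very large graphs would need an iterative DFS):

def break_cycles(n, edges):
    adj = {u: set() for u in range(n)}
    for u, v in edges:
        adj[u].add(v)

    def find_cycle_edge():
        color = {u: 0 for u in range(n)}  # 0 = unvisited, 1 = on stack, 2 = done

        def dfs(u):
            color[u] = 1
            for v in adj[u]:
                if color[v] == 1:
                    return (u, v)         # a back edge closes a cycle
                if color[v] == 0:
                    found = dfs(v)
                    if found:
                        return found
            color[u] = 2
            return None

        for u in range(n):
            if color[u] == 0:
                found = dfs(u)
                if found:
                    return found
        return None

    e = find_cycle_edge()
    while e is not None:
        adj[e[0]].remove(e[1])            # break the cycle by deleting one edge
        e = find_cycle_edge()
    return [(u, v) for u in adj for v in adj[u]]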
A very simple approach is:
Randomly assign edges by iterating over the indices of a lower diagonal matrix (as suggested by a link above: https://mathematica.stackexchange.com/questions/608/how-to-generate-random-directed-acyclic-graphs)
This will give you a DAG, possibly with more than one component. You can use a disjoint-set data structure to find the components, which can then be merged by creating edges between them.
Disjoint-sets are described here: https://en.wikipedia.org/wiki/Disjoint-set_data_structure
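As a sketch of the merging step (my own illustration; it assumes edges already go from lower to higher indices, so the connecting edges it adds cannot create a cycle):

def connect_components(n, edges):
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    # one representative per component, in increasing index order
    reps = sorted({find(u) for u in range(n)})
    # chaining them with lower-to-higher edges keeps the graph acyclic
    return edges + [(reps[i], reps[i + 1]) for i in range(len(reps) - 1)]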
Edit: I originally found this post while working on a scheduling problem called the flexible job shop scheduling problem with sequencing flexibility, where jobs (the order in which operations are processed) are defined by directed acyclic graphs. The idea was to use an algorithm to generate multiple random directed graphs (jobs) and create instances of the scheduling problem to test my algorithms. The code at the end of this post is a basic version of the one I used to generate the instances. The instance generator can be found here.
I translated it to Python and added some functionality to remove transitive edges from the random DAG. This way, the generated graph has the minimum number of edges with the same reachability.
The resulting graph can be visualized at http://dagitty.net/dags.html by pasting the output into Model code (on the right).
Python version of the algorithm
import random


class Graph:
    def __init__(self):
        self.nodes = []
        self.edges = []
        self.removed_edges = []

    def remove_edge(self, x, y):
        e = (x, y)
        try:
            self.edges.remove(e)
            self.removed_edges.append(e)
        except ValueError:
            pass

    def Nodes(self):
        return self.nodes


def get_random_dag():
    MIN_PER_RANK = 1    # Nodes/Rank: How 'fat' the DAG should be
    MAX_PER_RANK = 2
    MIN_RANKS = 6       # Ranks: How 'tall' the DAG should be
    MAX_RANKS = 10
    PERCENT = 0.3       # Chance of having an Edge

    nodes = 0
    ranks = random.randint(MIN_RANKS, MAX_RANKS)
    adjacency = []
    for i in range(ranks):
        # New nodes of 'higher' rank than all nodes generated till now
        new_nodes = random.randint(MIN_PER_RANK, MAX_PER_RANK)
        # Edges from old nodes ('nodes') to new ones ('new_nodes')
        for j in range(nodes):
            for k in range(new_nodes):
                if random.random() < PERCENT:
                    adjacency.append((j, k + nodes))
        nodes += new_nodes

    # Compute the transitive graph
    G = Graph()
    # Append nodes
    for i in range(nodes):
        G.nodes.append(i)
    # Append adjacencies
    for i in range(len(adjacency)):
        G.edges.append(adjacency[i])

    # Remove the edge (x, z) whenever both (x, y) and (y, z) exist
    N = G.Nodes()
    for x in N:
        for y in N:
            for z in N:
                if (x, y) != (y, z) and (x, y) != (x, z):
                    if (x, y) in G.edges and (y, z) in G.edges:
                        G.remove_edge(x, z)

    # Print graph
    for i in range(nodes):
        print(i)
    print()
    for value in G.edges:
        print(str(value[0]) + ' ' + str(value[1]))


get_random_dag()
Below, the figure shows the random DAG, with many redundant edges, generated by the Python code above.
I adapted the code to generate the same graph (same reachability) but with the fewest possible edges. This is also called a transitive reduction.
def get_random_dag():
    MIN_PER_RANK = 1    # Nodes/Rank: How 'fat' the DAG should be
    MAX_PER_RANK = 3
    MIN_RANKS = 15      # Ranks: How 'tall' the DAG should be
    MAX_RANKS = 20
    PERCENT = 0.3       # Chance of having an Edge

    nodes = 0
    node_counter = 0
    ranks = random.randint(MIN_RANKS, MAX_RANKS)
    adjacency = []
    rank_list = []
    for i in range(ranks):
        # New nodes of 'higher' rank than all nodes generated till now
        new_nodes = random.randint(MIN_PER_RANK, MAX_PER_RANK)
        new_rank = []   # renamed from 'list' to avoid shadowing the builtin
        for j in range(new_nodes):
            new_rank.append(node_counter)
            node_counter += 1
        rank_list.append(new_rank)
        print(rank_list)
        # Edges only from the previous rank ('rank_list[i - 1]') to the new nodes
        if i > 0:
            for j in rank_list[i - 1]:
                for k in range(new_nodes):
                    if random.random() < PERCENT:
                        adjacency.append((j, k + nodes))
        nodes += new_nodes

    for i in range(nodes):
        print(i)
    print()
    for edge in adjacency:
        print(str(edge[0]) + ' ' + str(edge[1]))
    print()
    print()
Result:
Create a graph with n nodes and an edge between each pair of nodes n1 and n2 if n1 != n2 and n2 % n1 == 0.
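For example, in Python (using nodes 1..n, since node 0 would make n2 % n1 undefined):

n = 20
edges = [(a, b) for a in range(1, n + 1) for b in range(1, n + 1)
         if a != b and b % a == 0]

Every edge goes from a number to a strictly larger multiple of it, so the result is acyclic.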
I recently tried re-implementing the accepted answer and found that it is non-deterministic in an unfortunate way: if you don't enforce the min_per_rank parameter, you could end up with a graph with 0 nodes.
To prevent this, I wrapped the for loops in a function and then checked, after each rank, that min_per_rank was satisfied. Here's the JavaScript implementation:
https://github.com/karissa/random-dag
And some pseudo-C code that would replace the accepted answer's main loop.
int pushed = 0

int addRank (void)
{
  for (j = 0; j < nodes; j++)
    for (k = 0; k < new_nodes; k++)
      if ( (rand () % 100) < PERCENT)
        printf (" %d -> %d;\n", j, k + nodes); /* An Edge. */
  if (pushed < min_per_rank) return addRank()
  else pushed = 0
  return 0
}
Generating a random DAG which might not be connected
Here's a simple algorithm for generating a random DAG that might not be connected.
const randomDAG = (x, n) => {
const length = n * (n - 1) / 2;
const dag = new Array(length);
for (let i = 0; i < length; i++) {
dag[i] = Math.random() < x ? 1 : 0;
}
return dag;
};
const dagIndex = (n, i, j) => n * i + j - (i + 1) * (i + 2) / 2;
const dagToDot = (n, dag) => {
let dot = "digraph {\n";
for (let i = 0; i < n; i++) {
dot += ` ${i};\n`;
for (let j = i + 1; j < n; j++) {
const k = dagIndex(n, i, j);
if (dag[k]) dot += ` ${i} -> ${j};\n`;
}
}
return dot + "}";
};
const randomDot = (x, n) => dagToDot(n, randomDAG(x, n));
new Viz().renderSVGElement(randomDot(0.3, 10)).then(svg => {
document.body.appendChild(svg);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/viz.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/full.render.js"></script>
If you run this code snippet a couple of times, you might see a DAG which is not connected.
So, how does this code work?
A directed acyclic graph (DAG) can be viewed as an undirected graph whose vertices have been put in a topological order. An undirected graph of n vertices can have a maximum of n * (n - 1) / 2 edges, not counting repeated edges or edges from a vertex to itself. Since we only allow an edge from a lower vertex to a higher vertex, the direction of every edge is predetermined.
This means that you can represent the entire DAG using a one dimensional array of n * (n - 1) / 2 edge weights. An edge weight of 0 means that the edge is absent. Hence, we just create a random array of zeros or ones, and that's our random DAG.
An edge from vertex i to vertex j in a DAG of n vertices, where i < j, has an edge weight at index k where k = n * i + j - (i + 1) * (i + 2) / 2.
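A quick throwaway Python check of that index formula: it should enumerate every index in [0, n * (n - 1) / 2) exactly once.

n = 10
ks = [n * i + j - (i + 1) * (i + 2) // 2
      for i in range(n) for j in range(i + 1, n)]
assert sorted(ks) == list(range(n * (n - 1) // 2))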
Generating a connected DAG
Once you generate a random DAG, you can check if it's connected using the following function.
const isConnected = (n, dag) => {
    const reached = new Array(n).fill(false);
    reached[0] = true;
    const queue = [0];
    while (queue.length > 0) {
        const x = queue.shift();
        for (let i = 0; i < n; i++) {
            if (i === x || reached[i]) continue; // skip the current and already-reached vertices
            const j = i < x ? dagIndex(n, i, x) : dagIndex(n, x, i);
            if (dag[j] === 0) continue;
            reached[i] = true;
            queue.push(i);
        }
    }
    return reached.every(x => x); // return true if every vertex was reached
};
If it's not connected, then its complement is guaranteed to be connected: any two vertices in different components of the original graph are directly joined in the complement, and two vertices in the same component are both joined in the complement to any vertex of a different component.
const complement = dag => dag.map(x => x ? 0 : 1);
const randomConnectedDAG = (x, n) => {
const dag = randomDAG(x, n);
return isConnected(n, dag) ? dag : complement(dag);
};
Note that if we create a random DAG with 30% edges then its complement will have 70% edges. Hence, the only safe value for x is 50%. However, if you care about connectivity more than the percentage of edges then this shouldn't be a deal breaker.
Finally, putting it all together.
const randomDAG = (x, n) => {
const length = n * (n - 1) / 2;
const dag = new Array(length);
for (let i = 0; i < length; i++) {
dag[i] = Math.random() < x ? 1 : 0;
}
return dag;
};
const dagIndex = (n, i, j) => n * i + j - (i + 1) * (i + 2) / 2;
const isConnected = (n, dag) => {
    const reached = new Array(n).fill(false);
    reached[0] = true;
    const queue = [0];
    while (queue.length > 0) {
        const x = queue.shift();
        for (let i = 0; i < n; i++) {
            if (i === x || reached[i]) continue; // skip the current and already-reached vertices
            const j = i < x ? dagIndex(n, i, x) : dagIndex(n, x, i);
            if (dag[j] === 0) continue;
            reached[i] = true;
            queue.push(i);
        }
    }
    return reached.every(x => x); // return true if every vertex was reached
};
const complement = dag => dag.map(x => x ? 0 : 1);
const randomConnectedDAG = (x, n) => {
const dag = randomDAG(x, n);
return isConnected(n, dag) ? dag : complement(dag);
};
const dagToDot = (n, dag) => {
let dot = "digraph {\n";
for (let i = 0; i < n; i++) {
dot += ` ${i};\n`;
for (let j = i + 1; j < n; j++) {
const k = dagIndex(n, i, j);
if (dag[k]) dot += ` ${i} -> ${j};\n`;
}
}
return dot + "}";
};
const randomConnectedDot = (x, n) => dagToDot(n, randomConnectedDAG(x, n));
new Viz().renderSVGElement(randomConnectedDot(0.3, 10)).then(svg => {
document.body.appendChild(svg);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/viz.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/full.render.js"></script>
If you run this code snippet a couple of times, you may see a DAG with a lot more edges than others.
Generating a connected DAG with a certain percentage of edges
If you care about both connectivity and having a certain percentage of edges then you can use the following algorithm.
Start with a fully connected graph.
Randomly remove edges.
After removing an edge, check if the graph is still connected.
If it's no longer connected then add that edge back.
It should be noted that this algorithm is not as efficient as the previous method.
const randomDAG = (x, n) => {
const length = n * (n - 1) / 2;
const dag = new Array(length).fill(1);
for (let i = 0; i < length; i++) {
if (Math.random() < x) continue;
dag[i] = 0;
if (!isConnected(n, dag)) dag[i] = 1;
}
return dag;
};
const dagIndex = (n, i, j) => n * i + j - (i + 1) * (i + 2) / 2;
const isConnected = (n, dag) => {
    const reached = new Array(n).fill(false);
    reached[0] = true;
    const queue = [0];
    while (queue.length > 0) {
        const x = queue.shift();
        for (let i = 0; i < n; i++) {
            if (i === x || reached[i]) continue; // skip the current and already-reached vertices
            const j = i < x ? dagIndex(n, i, x) : dagIndex(n, x, i);
            if (dag[j] === 0) continue;
            reached[i] = true;
            queue.push(i);
        }
    }
    return reached.every(x => x); // return true if every vertex was reached
};
const dagToDot = (n, dag) => {
let dot = "digraph {\n";
for (let i = 0; i < n; i++) {
dot += ` ${i};\n`;
for (let j = i + 1; j < n; j++) {
const k = dagIndex(n, i, j);
if (dag[k]) dot += ` ${i} -> ${j};\n`;
}
}
return dot + "}";
};
const randomDot = (x, n) => dagToDot(n, randomDAG(x, n));
new Viz().renderSVGElement(randomDot(0.3, 10)).then(svg => {
document.body.appendChild(svg);
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/viz.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/viz.js/2.1.2/full.render.js"></script>
Hope that helps.
To test algorithms, I generated random graphs based on node layers. This is the Python script (it also prints the adjacency list). You can change the node connection probability percentages or add layers to get slightly different or "taller" graphs:
# Weighted DAG generator by forward layers
import argparse
import random

parser = argparse.ArgumentParser("dag_gen2")
parser.add_argument(
    "--layers",
    help="DAG forward layers. Default=5",
    type=int,
    default=5,
)
args = parser.parse_args()

layers = [[] for _ in range(args.layers)]
edges = {}
node_index = -1

print(f"Creating {len(layers)} layers graph")

# Random horizontal connections -low probability-
def random_horizontal(layer):
    for node1 in layer:
        # Avoid cycles
        for node2 in filter(
            lambda n2: node1 != n2 and node1 not in map(lambda el: el[0], edges[n2]),
            layer,
        ):
            if random.randint(0, 100) < 10:
                w = random.randint(1, 10)
                edges[node1].append((node2, w))

# Connect two layers
def connect(layer1, layer2):
    random_horizontal(layer1)
    for node1 in layer1:
        for node2 in layer2:
            if random.randint(0, 100) < 30:
                w = random.randint(1, 10)
                edges[node1].append((node2, w))

# Start nodes 1 to 3
start_nodes = random.randint(1, 3)
start_layer = []
for sn in range(start_nodes + 1):
    node_index += 1
    start_layer.append(node_index)

# Gen nodes
for layer in layers:
    nodes = random.randint(2, 5)
    for n in range(nodes):
        node_index += 1
        layer.append(node_index)

# Connect all
layers.insert(0, start_layer)
for layer in layers:
    for node in layer:
        edges[node] = []
for i, layer in enumerate(layers[:-1]):
    connect(layer, layers[i + 1])

# Print in DOT language
print("digraph {")
for node_key in [node_key for node_key in edges.keys() if len(edges[node_key]) > 0]:
    for node_dst, weight in edges[node_key]:
        print(f" {node_key} -> {node_dst} [label={weight}];")
print("}")

print("---- Adjacency list ----")
print(edges)
Related
I have a c[N][M] matrix where I apply a max-sum operation over a (K+1)² window. I am trying to reduce the complexity of the naive algorithm.
In particular, here's my code snippet in C++:
int N,M,K;
std::cin >> N >> M >> K;
std::pair< unsigned , unsigned > opt[N][M];
unsigned c[N][M];
// Read values for c[i][j]
// Initialize all opt[i][j] at (0,0).
for ( int i = 0; i < N; i ++ ) {
for ( int j = 0; j < M ; j ++ ) {
unsigned max = 0;
int posX = i, posY = j;
for ( int ii = i; (ii >= i - K) && (ii >= 0); ii -- ) {
for ( int jj = j; (jj >= j - K) && (jj >= 0); jj -- ) {
// Ignore the (i,j) position
if (( ii == i ) && ( jj == j )) {
continue;
}
if ( opt[ii][jj].second > max ) {
max = opt[ii][jj].second;
posX = ii;
posY = jj;
}
}
}
opt[i][j].first = opt[posX][posY].second;
opt[i][j].second = c[i][j] + opt[posX][posY].first;
}
}
The goal of the algorithm is to compute opt[N-1][M-1].
Example: for N = 4, M = 4, K = 2 and:
c[N][M] = 4 1 1 2
6 1 1 1
1 2 5 8
1 1 8 0
... the result should be opt[N-1][M-1] = {14, 11}.
The running complexity of this snippet is however O(N M K²). My goal is to reduce the running time complexity. I have already seen posts like this, but it appears that my "filter" is not separable, probably because of the sum operation.
More information (optional): this is essentially an algorithm which develops the optimal strategy in a "game" where:
Two players lead a single team in a N × M dungeon.
Each position of the dungeon has c[i][j] gold coins.
Starting position: (N-1,M-1) where c[N-1][M-1] = 0.
The active player chooses the next position to move the team to, from position (x,y).
The next position can be any of (x-i, y-j), i <= K, j <= K, i+j > 0. In other words, they can move only left and/or up, up to a step K per direction.
The player who just moved the team gets the coins in the new position.
The active player alternates each turn.
The game ends when the team reaches (0,0).
Optimal strategy for both players: maximize their own sum of gold coins, if they know that the opponent is following the same strategy.
Thus, opt[i][j].first represents the coins of the player who will now move from (i,j) to another position. opt[i][j].second represents the coins of the opponent.
Here is an O(N * M) solution.
Let's fix the lower row r. If the maximum over all rows between r - K and r is known for every column, this problem reduces to the well-known sliding window maximum problem, so the answer for a fixed row can be computed in O(M) time.
Let's iterate over all rows in increasing order. For each column, maintaining the maximum over all rows between r - K and r is a sliding window maximum problem, too. Processing each column takes O(N) time over all rows.
The total time complexity is O(N * M).
However, there is one issue with this solution: it does not exclude the (i, j) element itself. It is possible to fix this by running the algorithm described above twice (with K * (K + 1) and (K + 1) * K windows) and then merging the results (a (K + 1) * (K + 1) square without a corner is the union of two rectangles of sizes K * (K + 1) and (K + 1) * K).
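For reference, here is the standard monotonic-deque formulation of the sliding window maximum subproblem in Python (a generic sketch, not wired into the opt recurrence above):

from collections import deque

def sliding_window_max(a, w):
    # Maximum of every length-w window of a, in O(len(a)) total time.
    dq = deque()                  # indices whose values are decreasing
    out = []
    for i, x in enumerate(a):
        while dq and a[dq[-1]] <= x:
            dq.pop()              # x dominates smaller, older values
        dq.append(i)
        if dq[0] <= i - w:
            dq.popleft()          # the front fell out of the window
        if i >= w - 1:
            out.append(a[dq[0]])
    return out

print(sliding_window_max([4, 1, 1, 2, 6, 1, 5], 3))  # [4, 2, 6, 6, 6]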
I'm trying to solve the following problem:
A rectangular paper sheet of M*N is to be cut down into squares such that:
The paper is cut along a line that is parallel to one of the sides of the paper.
The paper is cut such that the resultant dimensions are always integers.
The process stops when the paper can't be cut any further.
What is the minimum number of square pieces into which the paper can be cut?
Limits: 1 <= N <= 100 and 1 <= M <= 100.
Example: Let N=1 and M=2, then answer is 2 as the minimum number of squares that can be cut is 2 (the paper is cut horizontally along the smaller side in the middle).
My code:
cin >> n >> m;
int N = min(n,m);
int M = max(n,m);
int ans = 0;
while (N != M) {
ans++;
int x = M - N;
int y = N;
M = max(x, y);
N = min(x, y);
}
if (N == M && M != 0)
ans++;
But I am not getting what's wrong with this approach as it's giving me a wrong answer.
I think both the DP and the greedy solutions are not optimal. Here is a counterexample for the DP solution:
Consider a rectangle of size 13 x 11. The DP solution gives 8 as the answer, but the optimal solution has only 6 squares.
This thread has many counterexamples: https://mathoverflow.net/questions/116382/tiling-a-rectangle-with-the-smallest-number-of-squares
Also, have a look at this for a correct solution: http://int-e.eu/~bf3/squares/
I'd write this as a dynamic (recursive) program.
Write a function which tries to split the rectangle at some position. Call the function recursively for both parts. Try all possible splits and take the one with the minimum result.
The base case would be when both sides are equal, i.e. the input is already a square, in which case the result is 1.
function min_squares(m, n):
    // base case:
    if m == n: return 1

    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }

    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }

    return min { min_hor, min_ver }
To improve performance, you can cache the recursive results:
function min_squares(m, n):
    // base case:
    if m == n: return 1

    // check if we already cached this
    if cache contains (m, n):
        return cache(m, n)

    // minimum number of squares if you split vertically:
    min_ver := min { min_squares(m, i) + min_squares(m, n-i) | i ∈ [1, n/2] }

    // minimum number of squares if you split horizontally:
    min_hor := min { min_squares(i, n) + min_squares(m-i, n) | i ∈ [1, m/2] }

    // put in cache and return
    result := min { min_hor, min_ver }
    cache(m, n) := result
    return result
In a concrete C++ implementation, you could use int cache[101][101] for the cache data structure (indices go up to 100, since 1 <= N, M <= 100). Put it as a static local variable, so it will automatically be initialized with zeroes. Then interpret 0 as "not cached" (as it can't be the result of any input).
Possible C++ implementation: http://ideone.com/HbiFOH
The greedy algorithm is not optimal. On a 6x5 rectangle, it uses a 5x5 square and 5 1x1 squares. The optimal solution uses 2 3x3 squares and 3 2x2 squares.
To get an optimal solution, use dynamic programming. The brute-force recursive solution tries all possible horizontal and vertical first cuts, recursively cutting the two pieces optimally. By caching (memoizing) the value of the function for each input, we get a polynomial-time dynamic program (O(m n max(m, n))).
This problem can be solved using dynamic programming.
Assume we have a rectangle with width N and height M.
If N == M, the rectangle is already a square, so the answer is 1.
Otherwise, we can divide the rectangle into two smaller ones, (N - x, M) and (x, M), and solve each of them recursively.
Similarly, we can also divide it into (N, M - x) and (N, x).
Pseudo code:
int[][] dp;
boolean[][] check;

int cutNeeded(int n, int m)
    if (n == m)
        return 1;
    if (check[n][m])
        return dp[n][m];
    check[n][m] = true;

    int result = n*m;
    for (int i = 1; i <= n/2; i++)
        int tmp = cutNeeded(n - i, m) + cutNeeded(i, m);
        result = min(tmp, result);
    for (int i = 1; i <= m/2; i++)
        int tmp = cutNeeded(n, m - i) + cutNeeded(n, i);
        result = min(tmp, result);
    return dp[n][m] = result;
Here is a greedy implementation. As #David mentioned, it is not optimal and is completely wrong in some cases, so the dynamic approach (with caching) is best.
def greedy(m, n):
    if m == n:
        return 1
    if m < n:
        m, n = n, m
    cuts = 0
    while n:
        cuts += m // n        # integer division
        m, n = n, m % n
    return cuts

print(greedy(2, 7))
Here is DP attempt in python
import sys
def cache(f):
db = {}
def wrap(*args):
key = str(args)
if key not in db:
db[key] = f(*args)
return db[key]
return wrap
#cache
def squares(m, n):
if m == n:
return 1
xcuts = sys.maxint
ycuts = sys.maxint
x, y = 1, 1
while x * 2 <= n:
xcuts = min(xcuts, squares(m, x) + squares(m, n - x))
x += 1
while y * 2 <= m:
ycuts = min(ycuts, squares(y, n) + squares(m - y, n))
y += 1
return min(xcuts, ycuts)
This is essentially the classic integer (0-1) knapsack problem, which can be solved using a greedy or dynamic programming approach. You may refer to: Solving the Integer Knapsack
Let v be a vector of dimension 9, where every element x takes one of the three possible values -1, 0, or 1.
How can I generate all the possible combinations of vector v?
Example: v = {1,0,-1,0,0,1,1,1,0}, v = {-1,0,-1,1,0,0,1,1,0}, etc.
Will I have 3^9 combinations?
Thank you.
If you are using Python, you can simply do this:

import itertools

v = itertools.product([-1, 0, 1], repeat=9)
# v is a generator
# to have the whole list as tuples:
list_v = list(v)
# Verify the number of combinations
print(len(list_v))
And it gives you: 19683, or 3^9
The idea is this:
You have a position for each element of v (9 in your case):
- - - - - - - - -
Each position can hold one of three different values (-1 | 0 | 1), so the total number of combinations is 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 * 3 = 3^9.
To generate such combinations, simply simulate this process, for example with nested for loops; e.g. for three positions:
values[] = {-1, 0, 1};

for (i = 0; i < 3; i++)
    for (j = 0; j < 3; j++)
        for (k = 0; k < 3; k++)
            print values[i], values[j], values[k]
In your case you would need nine nested loops! An easier implementation involves recursion, though it's sometimes more complicated to understand. Here is the idea anyway:
values[] = {-1, 0, 1};

void generate(int position)
{
    if (position == 0) {
        println();
        return;
    }
    for (int i = 0; i < 3; i++) {
        print(values[i], ", ");
        generate(position - 1);
    }
}

// call the function with
generate(9);
This other answer explains in a bit more detail how a recursive generator works.
I read the approach given by Wikipedia for printing the shortest path between two given points in a graph by modifying the Floyd–Warshall algorithm. I coded this, but it's not really giving the expected output:
Initialize all the elements in minimumDistanceMatrix[i][j] to the respective weights in the graph and all the elements in the matrix shortestPathCalculatorMatrix[i][j] to -1.
Then:
// Find shortest path using Floyd–Warshall algorithm
for ( unsigned int k = 0 ; k < getTotalNumberOfCities() ; ++ k)
for ( unsigned int i = 0 ; i < getTotalNumberOfCities() ; ++ i)
for ( unsigned int j = 0 ; j < getTotalNumberOfCities() ; ++ j)
if ( minimumDistanceMatrix[i][k] + minimumDistanceMatrix[k][j] < minimumDistanceMatrix[i][j] )
{
minimumDistanceMatrix[i][j] = minimumDistanceMatrix[i][k] + minimumDistanceMatrix[k][j];
shortestPathCalculatorMatrix [i][j] = k;
}
Then:
void CitiesMap::findShortestPathListBetween(int source , int destination)
{
if( source == destination || source < 0 || destination < 0)
return;
if( INFINITY == getShortestPathBetween(source,destination) )
return ;
int intermediate = shortestPathCalculatorMatrix[source][destination];
if( -1 == intermediate )
{
pathCityList.push_back( destination );
return ;
}
else
{
findShortestPathListBetween( source, intermediate ) ;
pathCityList.push_back(intermediate);
findShortestPathListBetween( intermediate, destination ) ;
return ;
}
}
P.S: pathCityList is a vector which is assumed to be empty before a call to findShortestPathListBetween is made. At the end of this call, this vector has all the intermediate nodes in it.
Can someone point out where I might be going wrong ?
It’s much easier (and more direct) not to iterate over indices but over vertices. Furthermore, each predecessor (usually denoted π, not next) needs to point to its, well, predecessor, not the current intermediate vertex.
Given a |V|×|V| adjacency matrix dist for the distances, initialised to infinity, and a |V|×|V| adjacency matrix next of pointers to vertices:
for each vertex v
    dist[v, v] ← 0
for each edge (u,v)
    dist[u, v] ← w(u,v)  // the weight of the edge (u,v)
    next[u, v] ← u

for each vertex k
    for each vertex i
        for each vertex j
            if dist[i, k] + dist[k, j] < dist[i, j] then
                dist[i, j] ← dist[i, k] + dist[k, j]
                next[i, j] ← next[k, j]
Note that I’ve changed the three nested loops to iterate over vertices not indices, and I’ve fixed the last line to refer to the previous node rather than any intermediate node.
An implementation of the above which looks almost like the pseudocode can be found, for instance, in scipy.sparse.csgraph.
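For example (assuming SciPy is available; zero entries in a dense input mean "no edge"):

import numpy as np
from scipy.sparse.csgraph import floyd_warshall

graph = np.array([[0, 5, 0],
                  [0, 0, 2],
                  [1, 0, 0]])
dist, pred = floyd_warshall(graph, return_predecessors=True)
# pred[i, j] is the vertex preceding j on the shortest path from i to j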
Path reconstruction starts at the end (j in the code below) and jumps to the predecessor of j (at next[i, j]) until it reaches i.
function path(i, j)
    if i = j then
        write(i)
    else if next[i, j] = NIL then
        write("no path exists")
    else
        path(i, next[i, j])
        write(j)
A bit late, but the above code is flawed: it should not be next[i][j] = next[k][j]; the correct update is next[i][j] = next[i][k].
Try it yourself on a sample input and you will see why this works and why the previous one is wrong.
Here is the implementation below. Why not try a problem with it! Enjoy CP!!
// g[][] is the graph
// next[i][j] stores the vertex next to i on the shortest path between (i,j)
for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j++) {
        if (g[i][j] == 0) g[i][j] = inf; // there is no edge between (i,j)
        else next[i][j] = j;             // if there is an edge between i and j, then j is next to i
    }
}
for (int k = 1; k <= n; k++) {
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            if (g[i][j] > g[i][k] + g[k][j]) {
                g[i][j] = g[i][k] + g[k][j];
                next[i][j] = next[i][k]; // found a vertex k after i that further decreases the shortest path between (i,j), so update it
            }
        }
    }
}
// for printing the path
for (int i = 1; i <= n; i++) {
    for (int j = i + 1; j <= n; j++) {
        int u = i, v = j;
        print(u + " ");
        while (u != v) {
            u = next[u][v];
            print(u + " ");
        }
        print("\n");
    }
}
Original question and simple algorithm
Given a set of relations such as
a < c
b < c
b < d < e
what is the most efficient algorithm to find a set of integers starting with 0 (and with as many repeated integers as possible!) that matches the set of relations, i.e. in this case
a = 0; b = 0; c = 1; d = 1; e = 2
The trivial algorithm is to repeatedly iterate over the set of relations and increasing values as necessary until convergence is reached, as implemented below in Python:
relations = [('a', 'c'), ('b', 'c'), ('b', 'd', 'e')]
print(relations)

values = dict.fromkeys(set(sum(relations, ())), 0)
print(values)

converged = False
while not converged:
    converged = True
    for relation in relations:
        for i in range(1, len(relation)):
            if values[relation[i]] <= values[relation[i-1]]:
                converged = False
                values[relation[i]] += values[relation[i-1]] - values[relation[i]] + 1
print(values)
Aside from the O(relations²) complexity (if I'm not mistaken), this algorithm also goes into an infinite loop if an invalid relation is given (such as adding e < d to the set above). Detecting such a failure case is not strictly necessary for my use case, but would be a nice bonus.
Python implementation based on Tim Peters' comments
relations = [('a', 'c'), ('b', 'c'), ('b', 'd'), ('b', 'e'), ('d', 'e')]

symbols = set(sum(relations, ()))
numIncoming = dict.fromkeys(symbols, 0)
values = {}

for rel in relations:
    numIncoming[rel[1]] += 1

k = 0
n = len(symbols)
c = 0
while k < n:
    curs = [sym for sym in symbols if numIncoming[sym] == 0]
    curr = [rel for rel in relations if rel[0] in curs]
    for sym in curs:
        symbols.remove(sym)
        values[sym] = c
    for rel in curr:
        relations.remove(rel)
        numIncoming[rel[1]] -= 1
    c += 1
    k += len(curs)
print(values)
At the moment it requires the relations to be "split" (b < d and d < e instead of b < d < e), but detection of loops is easy (whenever curs is empty and k < n), and it should be possible to implement it somewhat more efficiently (especially in how curs and curr are determined).
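For example, loop detection would be a two-line addition right after curs is computed (a sketch):

    curs = [sym for sym in symbols if numIncoming[sym] == 0]
    if not curs:
        raise ValueError("cycle detected among: %s" % symbols)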
Worst case timing (1000 elements, 999 relations, reverse order):
Version A: 0.944926519991
Version B: 0.115537379751
Best case timing (1000 elements, 999 relations, forward order):
Version A: 0.00497004507556
Version B: 0.102511841589
Average case timing (1000 elements, 999 relations, random order):
Version A: 0.487685376214
Version B: 0.109792166323
Test data can be generated via
from random import shuffle

n = 1000
relations_worst = list((a, b) for a, b in zip(range(n)[::-1][1:], range(n)[::-1]))
relations_best = list(relations_worst[::-1])
relations_avg = relations_worst[:]
shuffle(relations_avg)  # random.shuffle shuffles in place and returns None
C++ implementation based on Tim Peters' answer (simplified for symbols [0, n))
vector<unsigned> chunked_topsort(const vector<vector<unsigned>>& relations, unsigned n)
{
vector<unsigned> ret(n);
vector<set<unsigned>> succs(n);
vector<unsigned> npreds(n);
set<unsigned> allelts;
set<unsigned> nopreds;
for(auto i = n; i--;)
allelts.insert(i);
for(const auto& r : relations)
{
auto u = r[0];
if(npreds[u] == 0) nopreds.insert(u);
for(size_t i = 1; i < r.size(); ++i)
{
auto v = r[i];
if(npreds[v] == 0) nopreds.insert(v);
if(succs[u].count(v) == 0)
{
succs[u].insert(v);
npreds[v] += 1;
nopreds.erase(v);
}
u = v;
}
}
set<unsigned> next;
unsigned chunk = 0;
while(!nopreds.empty())
{
next.clear();
for(const auto& u : nopreds)
{
ret[u] = chunk;
allelts.erase(u);
for(const auto& v : succs[u])
{
npreds[v] -= 1;
if(npreds[v] == 0)
next.insert(v);
}
}
swap(nopreds, next);
++chunk;
}
assert(allelts.empty());
return ret;
}
C++ implementation with improved cache locality
vector<unsigned> chunked_topsort2(const vector<vector<unsigned>>& relations, unsigned n)
{
vector<unsigned> ret(n);
vector<unsigned> npreds(n);
vector<tuple<unsigned, unsigned>> flat_relations; flat_relations.reserve(relations.size());
vector<unsigned> relation_offsets(n+1);
for(const auto& r : relations)
{
if(r.size() < 2) continue;
for(size_t i = 0; i < r.size()-1; ++i)
{
assert(r[i] < n && r[i+1] < n);
flat_relations.emplace_back(r[i], r[i+1]);
relation_offsets[r[i]+1] += 1;
npreds[r[i+1]] += 1;
}
}
partial_sum(relation_offsets.begin(), relation_offsets.end(), relation_offsets.begin());
sort(flat_relations.begin(), flat_relations.end());
vector<unsigned> nopreds;
for(unsigned i = 0; i < n; ++i)
if(npreds[i] == 0)
nopreds.push_back(i);
vector<unsigned> next;
unsigned chunk = 0;
while(!nopreds.empty())
{
next.clear();
for(const auto& u : nopreds)
{
ret[u] = chunk;
for(unsigned i = relation_offsets[u]; i < relation_offsets[u+1]; ++i)
{
auto v = std::get<1>(flat_relations[i]);
npreds[v] -= 1;
if(npreds[v] == 0)
next.push_back(v);
}
}
swap(nopreds, next);
++chunk;
}
assert(all_of(npreds.begin(), npreds.end(), [](unsigned i) { return i == 0; }));
return ret;
}
C++ timings
10000 elements, 9999 relations, averaged over 1000 runs
"Worst case":
chunked_topsort: 4.21345 ms
chunked_topsort2: 1.75062 ms
"Best case":
chunked_topsort: 4.27287 ms
chunked_topsort2: 0.541771 ms
"Average case":
chunked_topsort: 6.44712 ms
chunked_topsort2: 0.955116 ms
Unlike the Python version the C++ chunked_topsort depends significantly on the order of the elements. Interestingly, the random order / average case is by far the slowest (with the set-based chunked_topsort).
Here's an implementation I didn't have time to post before:
def chunked_topsort(relations):
    # `relations` is an iterable producing relations.
    # A relation is a sequence, interpreted to mean
    #     relation[0] < relation[1] < relation[2] < ...
    # The result is a list such that
    #     result[i] is the set of elements assigned to i.
    from collections import defaultdict
    succs = defaultdict(set)   # new empty set is default
    npreds = defaultdict(int)  # 0 is default
    allelts = set()
    nopreds = set()

    def add_elt(u):
        allelts.add(u)
        if npreds[u] == 0:
            nopreds.add(u)

    for r in relations:
        u = r[0]
        add_elt(u)
        for i in range(1, len(r)):
            v = r[i]
            add_elt(v)
            if v not in succs[u]:
                succs[u].add(v)
                npreds[v] += 1
                nopreds.discard(v)
            u = v

    result = []
    while nopreds:
        result.append(nopreds)
        allelts -= nopreds
        next_nopreds = set()
        for u in nopreds:
            for v in succs[u]:
                npreds[v] -= 1
                assert npreds[v] >= 0
                if npreds[v] == 0:
                    next_nopreds.add(v)
        nopreds = next_nopreds
    if allelts:
        raise ValueError("elements in cycles %s" % allelts)
    return result
Then, e.g.,
>>> print(chunked_topsort(['ac', 'bc', 'bde', 'be', 'fbcg']))
[{'a', 'f'}, {'b'}, {'c', 'd'}, {'e', 'g'}]
Hope that helps. Note that there's no searching here of any kind (e.g., no conditional list comprehensions). That makes it theoretically ;-) efficient.
Later: timing
On the test data generated near the end of your post, chunked_topsort() is pretty much insensitive to the ordering of the inputs. That's not really surprising, since the algorithm only iterates over the inputs once to build its (inherently unordered) dicts and sets. In all, it's about 15 to 20 times faster than Version B. Typical timing output from 3 runs:
worst chunked 0.007 B 0.129 B/chunked 19.79
best chunked 0.007 B 0.110 B/chunked 16.85
avg chunked 0.006 B 0.118 B/chunked 19.06
worst chunked 0.007 B 0.127 B/chunked 18.25
best chunked 0.006 B 0.103 B/chunked 17.16
avg chunked 0.006 B 0.119 B/chunked 18.86
worst chunked 0.007 B 0.132 B/chunked 20.20
best chunked 0.007 B 0.105 B/chunked 16.04
avg chunked 0.007 B 0.113 B/chunked 17.32
With simpler data structures
Given that the problem has changed ;-), here's a rewrite that assumes inputs are integers in range(n), and that n is also passed. No sets, no dicts, and no dynamic allocations after the initial pass over the input relations. In Python, this is about 40% faster than chunked_topsort() on the test data. But I'm too old to wrestle with C++ anymore ;-)
def ct_special(relations, n):
    # `relations` is an iterable producing relations.
    # A relation is a sequence, interpreted to mean
    #     relation[0] < relation[1] < relation[2] < ...
    # All elements are in range(n).
    # The result is a vector of length n such that
    #     result[i] is the ordinal assigned to i, or
    #     result[i] is -1 if i didn't appear in the relations.
    succs = [[] for i in range(n)]
    npreds = [-1] * n
    nopreds = [-1] * n
    numnopreds = 0

    def add_elt(u):
        if not 0 <= u < n:
            raise ValueError("element %s out of range" % u)
        if npreds[u] < 0:
            npreds[u] = 0

    for r in relations:
        u = r[0]
        add_elt(u)
        for i in range(1, len(r)):
            v = r[i]
            add_elt(v)
            succs[u].append(v)
            npreds[v] += 1
            u = v

    result = [-1] * n
    for u in range(n):
        if npreds[u] == 0:
            nopreds[numnopreds] = u
            numnopreds += 1

    ordinal = nopreds_start = 0
    while nopreds_start < numnopreds:
        next_nopreds_start = numnopreds
        for i in range(nopreds_start, numnopreds):
            u = nopreds[i]
            result[u] = ordinal
            for v in succs[u]:
                npreds[v] -= 1
                assert npreds[v] >= 0
                if npreds[v] == 0:
                    nopreds[numnopreds] = v
                    numnopreds += 1
        nopreds_start = next_nopreds_start
        ordinal += 1

    if any(count > 0 for count in npreds):
        raise ValueError("elements in cycles")
    return result
This is again - in Python - insensitive to input ordering.