I am currently counting the number of paths of length $n$ in a bipartite graph by doing a depth-first search (up to 10 levels). However, my implementation of this takes 5+ minutes to count 7 million paths of length 5 in a bipartite graph with 3000+ elements. I am looking for a more efficient way to solve this counting problem, and I am wondering whether there is such an algorithm in the literature.
These are undirected bipartite graphs, so there can be cycles in the paths.
My goal here is to count the number of paths of length $n$ in a bipartite graph with 1 million elements in under a minute.
Thank you in advance for any suggested answers.
I agree with the first idea, but it's not quite a BFS. In a BFS you go through each node once; here a node can be processed a large number of times.
You have to keep two arrays (call them Cnt1 and Cnt2): Cnt1 holds, for each element, the number of paths of length i that reach it, and Cnt2 holds the same for length i + 1. Initially all the entries of Cnt2 are 0 and all the entries of Cnt1 are 1 (because there is one path of length zero starting at each node).
Repeat N times:
Go through all the nodes
For the current node, go through all of its connected nodes and, for each one, add to its position in Cnt2 the number of times you reached the current node in Cnt1.
When you have finished all the nodes, copy Cnt2 into Cnt1 and set Cnt2 to zero.
At the end, just add up all the numbers in Cnt1 and that is the answer.
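A minimal sketch of this procedure in C++ (my own naming, following the Cnt1/Cnt2 description above; note the counts can overflow 64 bits for long walks on large graphs):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// adj[v] lists the neighbours of node v; returns the number of paths of length n (edges),
// where revisiting nodes is allowed, summed over all starting nodes.
std::uint64_t countPaths(const std::vector<std::vector<int>>& adj, int n)
{
    std::size_t V = adj.size();
    std::vector<std::uint64_t> cnt1(V, 1);   // one path of length 0 starts at each node
    std::vector<std::uint64_t> cnt2(V, 0);
    for (int step = 0; step < n; ++step) {
        for (std::size_t v = 0; v < V; ++v)
            for (int u : adj[v])
                cnt2[u] += cnt1[v];          // extend every path ending at v by the edge (v, u)
        cnt1.swap(cnt2);
        std::fill(cnt2.begin(), cnt2.end(), 0);
    }
    std::uint64_t total = 0;
    for (std::uint64_t c : cnt1) total += c;
    return total;
}

Each of the n rounds touches every edge once, so the whole count is O(n * (V + E)).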
Convert to a breadth-first search, and whenever you have two paths that lead to the same node at the same length, just keep track of how many such paths there are, not how you got there.
This will avoid a lot of repeated work and should provide a significant speedup. (If n is not small, there are better speedups, read on.)
My goal here is to count the number of paths of length n in a bipartite graph with 1 million elements in under a minute.
Um, good luck?
An alternate approach to look into: if you take the adjacency matrix of the graph and raise it to the nth power, every entry of the resulting matrix is the number of paths of length n starting in one place and ending in another. So you can take shortcuts like repeated squaring. Convenient, isn't it?
Unfortunately a million-element graph gives rise to an adjacency matrix with 10^12 entries. Multiplying two such matrices with a naive algorithm requires about 10^18 operations. Of course we have better matrix multiplication algorithms, but you're still not getting below, say, 10^15 operations, which will most assuredly not complete in 1 minute. (If your matrix is sparse enough you might have a chance, but you should do some research on the topic.)
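For a graph small enough that its dense adjacency matrix fits in memory, a rough sketch of the repeated-squaring idea (my own code, not tuned for sparse matrices; entries can overflow 64 bits for large counts) could look like this:

#include <cstddef>
#include <cstdint>
#include <vector>

using Matrix = std::vector<std::vector<std::uint64_t>>;

// Plain cubic multiplication.
Matrix multiply(const Matrix& a, const Matrix& b)
{
    std::size_t n = a.size();
    Matrix c(n, std::vector<std::uint64_t>(n, 0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k)
            if (a[i][k])
                for (std::size_t j = 0; j < n; ++j)
                    c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Entry (i, j) of the result is the number of paths of length `power` from i to j;
// summing every entry of matrixPower(adjacency, n) gives the total count.
Matrix matrixPower(Matrix base, unsigned power)
{
    std::size_t n = base.size();
    Matrix result(n, std::vector<std::uint64_t>(n, 0));
    for (std::size_t i = 0; i < n; ++i) result[i][i] = 1;   // start from the identity matrix
    while (power > 0) {
        if (power & 1) result = multiply(result, base);     // repeated squaring
        base = multiply(base, base);
        power >>= 1;
    }
    return result;
}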
I am looking to generate derangements uniformly at random. In other words: shuffle a vector so that no element stays in its original place.
Requirements:
uniform sampling (each derangement is generated with equal probability)
a practical implementation is faster than the rejection method (i.e. keep generating random permutations until we find a derangement)
None of the answers I found so far are satisfactory, in that they either don't sample uniformly (or fail to prove uniformity) or do not make a practical comparison with the rejection method. About 1/e ≈ 37% of permutations are derangements, which gives a clue about the best performance one might expect relative to the rejection method.
The only reference I found which makes a practical comparison is in this thesis which benchmarks 7.76 s for their proposed algorithm vs 8.25 s for the rejection method (see page 73). That's a speedup by a factor of only 1.06. I am wondering if something significantly better (> 1.5) is possible.
I could implement and verify various algorithms proposed in papers, and benchmark them. Doing this correctly would take quite a bit of time. I am hoping that someone has done it, and can give me a reference.
Here is an idea for an algorithm that may work for you. Generate the derangement in cycle notation. So (1 2) (3 4 5) represents the derangement 2 1 4 5 3. (That is (1 2) is a cycle and so is (3 4 5).)
Put the first element in the first place (in cycle notation you can always do this) and take a random permutation of the rest. Now we just need to find out where the parentheses go for the cycle lengths.
As https://mathoverflow.net/questions/130457/the-distribution-of-cycle-length-in-random-derangement notes, in a random permutation the length of the cycle containing a given element is uniformly distributed. Cycle lengths are not distributed that way in derangements. But the number of derangements of length m is m!/e rounded up for even m and down for odd m. So what we can do is pick a cycle length uniformly distributed in the range 2..n and accept it with the probability that the remaining elements would, proceeding randomly, form a derangement (i.e. with probability d(m)/m!, where m is the number of elements left over). This cycle length will then be correctly distributed. Once we have the first cycle length, we repeat for the remaining elements until we are done.
The procedure as I described it is simpler to implement, but it is mathematically equivalent to taking a random derangement (by rejection), writing down the first cycle only, and then repeating. It is therefore possible to prove that this produces all derangements with equal probability.
Done naively, this approach takes an average of about 3 rolls before accepting a length. However, we then cut the problem roughly in half on average, so the number of random numbers we need for placing the parentheses is O(log(n)). Compared with the O(n) random numbers for constructing the permutation, this is a rounding error. It can still be optimized by noting that the highest acceptance probability is 0.5: if we accept with twice the probability that the remainder would be a derangement, the ratios stay correct and we get rid of most of the rejected cycle lengths.
If most of the time is spent in the random number generator, then for large n this should run at approximately 3x the rate of the rejection method. In practice it won't be quite as good, because switching from one representation to another is not free, but you should get a speedup of the order of magnitude you wanted.
This is just an idea, but I think it can produce uniformly distributed derangements.
You need a helper buffer of at most around N/2 elements, where N is the number of items to be arranged.
First, choose a random(1,N) position for value 1.
(Note: positions run from 1 to N instead of 0 to N-1 for simplicity.)
Then for value 2, the position will be random(1,N-1) if 1 fell on position 2, and random(1,N-2) otherwise.
The algorithm walks the list and counts only the not-yet-used positions until it reaches the chosen random position for value 2; of course position 2 is skipped.
For value 3, the algorithm checks whether position 3 is already used: if it is, pos3 = random(1,N-2); if not, pos3 = random(1,N-3).
Again, the algorithm walks the list, counting only the not-yet-used positions, until the count equals pos3, and then places value 3 there.
This goes on for the next values until all of them have been placed in positions.
And that should generate derangements with uniform probability.
The optimization effort should then focus on how the algorithm reaches pos# quickly.
Instead of walking the list to count the not-yet-used positions, the algorithm could use a somewhat heap-like search for the unused positions rather than counting and checking them one by one (or any other method besides heap-like searching). This is a separate problem to solve: how to reach an unused item given its rank in the list of unused items.
I'm curious ... and mathematically uninformed. So I ask innocently, why wouldn't a "simple shuffle" be sufficient?
for i from array_size - 1 downto 1: # assume zero-based arrays
    j = random(0,i-1)
    swap_elements(i,j)
Since the random function will never produce a value equal to i it will never leave an element where it started. Every element will be moved "somewhere else."
Let d(n) be the number of derangements of an array A of length n.
d(n) = (n-1) * (d(n-1) + d(n-2))
The d(n) derangements are achieved by:
1. First, swapping A[0] with one of the remaining n-1 elements
2. Next, either deranging all n-1 remaining elements, or deranging the n-2 remaining elements that exclude the index that received A[0] from the initial array.
How can we generate a derangement uniformly at random?
1. Perform the swap of step 1 above.
2. Randomly decide which path we're taking in step 2,
with probability d(n-1)/(d(n-1)+d(n-2)) of deranging all remaining elements.
3. Recurse down to derangements of sizes 2 and 3, which are both precomputed.
Wikipedia has d(n) = floor(n!/e + 0.5) (exactly). You can use this to calculate the probability of step 2 exactly in constant time for small n. For larger n the factorial gets slow, but all you need is the ratio d(n-1)/(d(n-1)+d(n-2)), which is approximately (n-1)/n. You can live with the approximation, or precompute and store the ratios up to the max n you're considering.
Note that (n-1)/n converges to 1 very quickly.
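Here is a rough sketch (my own code, not the answerer's) of this recursion turned into an iterative sampler. It keeps a list of "active" positions whose current element must still move, and computes the step-2 probability from the ratios f(m) = d(m)/m! in doubles, as suggested above:

#include <algorithm>
#include <random>
#include <vector>

// Returns a uniform random derangement of 0..n-1 (derangements exist only for n == 0 or n >= 2).
std::vector<int> randomDerangement(int n, std::mt19937& rng)
{
    // f[m] = d(m)/m!, the fraction of permutations of m elements that are derangements.
    std::vector<double> f(std::max(n + 1, 2));
    f[0] = 1.0;
    f[1] = 0.0;
    double term = -1.0;                                 // (-1)^m / m!, currently at m = 1
    for (int m = 2; m <= n; ++m) {
        term = -term / m;
        f[m] = f[m - 1] + term;
    }

    std::vector<int> A(n), pos(n);
    for (int i = 0; i < n; ++i) A[i] = pos[i] = i;      // all positions start out "active"

    std::uniform_real_distribution<double> coin(0.0, 1.0);
    while (pos.size() >= 2) {
        int m = (int)pos.size();
        // Step 1: swap the first active position with a uniformly chosen other active position.
        std::uniform_int_distribution<int> pick(1, m - 1);
        int r = pick(rng);
        std::swap(A[pos[0]], A[pos[r]]);
        // Step 2: with probability d(m-2) / (d(m-1) + d(m-2)) close the 2-cycle and retire
        // both positions; otherwise only the first position is settled and pos[r] stays active.
        double w1 = (m - 1) * f[m - 1];                 // proportional to d(m-1)
        double w2 = f[m - 2];                           // proportional to d(m-2)
        if (coin(rng) < w2 / (w1 + w2)) {
            pos[r] = pos.back();
            pos.pop_back();
        }
        pos[0] = pos.back();
        pos.pop_back();
    }
    return A;
}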
The limits are 100,000 (10^5) nodes and 2 or fewer edges per node.
How could we get a maximum independent set for this graph in O(n) or O(n log n) time? Otherwise, it exceeds the time limit. By the way, I just need to know the number of nodes in the set, not necessarily the set itself.
I know of the greedy O(n) approximation: pick a node with the lowest degree, add it to our set, remove it and all its neighbors, and repeat until the graph is empty. This approximation works for many cases. The thing is, with these restrictions, isn't there an algorithm that always works?
On that class of graphs, if you greedily choose a node with the lowest remaining degree and delete it together with its neighbors, you'll get an optimal solution, in linear time.
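A rough sketch of that greedy (my own code, assuming an adjacency-list input with at most two neighbours per node); it returns only the size of the set, as the question asks:

#include <vector>

int maxIndependentSetSize(const std::vector<std::vector<int>>& adj)
{
    int n = (int)adj.size();
    std::vector<int> deg(n);
    std::vector<bool> removed(n, false);
    std::vector<std::vector<int>> bucket(3);              // nodes grouped by current degree (0..2)
    for (int v = 0; v < n; ++v) {
        deg[v] = (int)adj[v].size();
        bucket[deg[v]].push_back(v);
    }
    int taken = 0;
    int d = 0;
    while (d <= 2) {
        if (bucket[d].empty()) { ++d; continue; }
        int v = bucket[d].back();
        bucket[d].pop_back();
        if (removed[v] || deg[v] != d) continue;          // stale bucket entry, skip it
        removed[v] = true;                                // take v into the independent set
        ++taken;
        for (int u : adj[v]) {                            // delete v's neighbours
            if (removed[u]) continue;
            removed[u] = true;
            for (int w : adj[u])                          // their neighbours lose one degree
                if (!removed[w]) bucket[--deg[w]].push_back(w);
        }
        d = 0;                                            // degrees dropped, recheck lower buckets
    }
    return taken;
}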
Problem: We are given two arrays A & B of integers. In each step we are allowed to remove a pair of non-coprime integers, one from each array. We have to find the maximum number of pairs that can be removed by these steps.
Bounds:
length of A, B <= 10^5
every integer <= 10^9
Dinic's algorithm - O(V^2 E)
Edmonds–Karp algorithm - O(V E^2)
Hopcroft–Karp algorithm - O(E sqrt(V))
My approach up till now: this can be modeled as a bipartite matching problem with two sets A and B, with an edge between every non-coprime pair of integers from the two sets.
But the problem is that there can be O(V^2) edges in the graph, and most bipartite-matching and max-flow algorithms will be far too slow for such large graphs.
I am looking for some problem-specific or mathematical optimization that can solve the problem in reasonable time. To pass the test cases I need at most an O(V log V) or O(V sqrt(V)) algorithm.
Thanks in advance.
You could try making a graph with vertices for:
A source
Every element in A
Every prime present in any number in A
Every element in B
A destination
Add directed edges with capacity 1 from source to elements in A, and from elements in B to destination.
Add directed edges with capacity 1 from each element x in A to every distinct prime in the prime factorisation of x.
Add directed edges with capacity 1 from each prime p to every element x in B where p divides x.
Then solve for max flow from source to destination.
The numbers have a small number of distinct prime factors (at most 9, because 2·3·5·7·11·13·17·19·23·29 is bigger than 10^9), so you will have at most 1,800,000 edges in the middle.
This is much fewer than the 10,000,000,000 edges you could have had before (e.g. if all 100,000 entries in A and B were all even) so perhaps your max flow algorithm has a chance of meeting the time limit.
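A loose sketch of just the graph construction (my own code; addEdge and maxFlow are hypothetical placeholders for whatever max-flow implementation, e.g. Dinic's, you plug in):

#include <cstddef>
#include <map>
#include <vector>

void addEdge(int from, int to, int capacity);        // hypothetical: add a directed edge
long long maxFlow(int source, int sink, int nodes);  // hypothetical: run the max-flow solver

std::vector<long long> distinctPrimeFactors(long long x)
{
    std::vector<long long> ps;                        // trial division; a sieve of primes up to
    for (long long p = 2; p * p <= x; ++p)            // sqrt(10^9) would make this much faster
        if (x % p == 0) { ps.push_back(p); while (x % p == 0) x /= p; }
    if (x > 1) ps.push_back(x);
    return ps;
}

long long maxRemovablePairs(const std::vector<long long>& A, const std::vector<long long>& B)
{
    // Node layout: 0 = source, 1..|A| = elements of A, then one node per distinct prime
    // occurring in A, then the elements of B, and finally the sink.
    std::map<long long, int> primeNode;
    int next = 1 + (int)A.size();
    for (std::size_t i = 0; i < A.size(); ++i) {
        addEdge(0, 1 + (int)i, 1);                               // source -> A[i]
        for (long long p : distinctPrimeFactors(A[i])) {
            if (!primeNode.count(p)) primeNode[p] = next++;
            addEdge(1 + (int)i, primeNode[p], 1);                // A[i] -> prime p
        }
    }
    int bBase = next, sink = bBase + (int)B.size();
    for (std::size_t j = 0; j < B.size(); ++j) {
        addEdge(bBase + (int)j, sink, 1);                        // B[j] -> sink
        for (long long p : distinctPrimeFactors(B[j]))
            if (primeNode.count(p))
                addEdge(primeNode[p], bBase + (int)j, 1);        // prime p -> B[j]
    }
    return maxFlow(0, sink, sink + 1);
}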
A large array array[n] of integers is given as input, together with two index values start, end. We want to find, very quickly, the min and max in the range [start,end] (inclusive), and the max in the rest of the array (excluding [start,end]).
eg-
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
start,end - 2,7
min,max in [2,7] -- 1,12
max in rest - 10
I cannot think of anything better than linear, but that is not good enough, since n is of order 10^5 and the number of such find operations is of the same order.
Any help would be highly appreciated.
The way I understand your question is that you want to do some preprocessing on a fixed array that then makes your find max operation very fast.
This answer describes an approach that does O(n log n) preprocessing work, followed by O(1) work for each query.
Preprocessing: O(n log n)
The idea is to prepare two 2d arrays BIG[a,k] and SMALL[a,k] where
1. BIG[a,k] is the max of the 2^k elements starting at a
2. SMALL[a,k] is the min of the 2^k elements starting at a
You can compute these arrays recursively, starting at k == 0 and building each higher level by combining two elements from the previous level:
BIG[a,k] = max(BIG[a,k-1] , BIG[a+2^(k-1),k-1])
SMALL[a,k] = min(SMALL[a,k-1] , SMALL[a+2^(k-1),k-1])
Lookup: O(1) per query
You are then able to instantly find the max and min for any range by combining 2 preprepared answers.
Suppose you want to find the max for elements from 100 to 133.
You already know the max of 32 elements 100 to 131 (in BIG[100,5]) and also the max of 32 elements from 102 to 133 (in BIG[102,5]) so you can find the largest of these to get the answer.
The same logic applies for the minimum. You can always find two overlapping prepared answers that will combine to give the answer you need.
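For illustration, a compact sketch of those two tables (my own code, with the indices written as big[k][a] rather than BIG[a,k]; it assumes a non-empty input array):

#include <algorithm>
#include <vector>

struct MinMaxTable {
    std::vector<std::vector<int>> big, small;     // big[k][a] = max of the 2^k elements starting at a
    std::vector<int> log2floor;                   // log2floor[len] = floor(log2(len))

    explicit MinMaxTable(const std::vector<int>& a)
    {
        int n = (int)a.size();
        log2floor.assign(n + 1, 0);
        for (int i = 2; i <= n; ++i) log2floor[i] = log2floor[i / 2] + 1;
        int K = log2floor[n] + 1;
        big.assign(K, std::vector<int>(n));
        small.assign(K, std::vector<int>(n));
        big[0] = small[0] = a;
        for (int k = 1; k < K; ++k)
            for (int i = 0; i + (1 << k) <= n; ++i) {
                big[k][i]   = std::max(big[k - 1][i],   big[k - 1][i + (1 << (k - 1))]);
                small[k][i] = std::min(small[k - 1][i], small[k - 1][i + (1 << (k - 1))]);
            }
    }

    // Max over the inclusive range [l, r], combining two overlapping precomputed blocks.
    int rangeMax(int l, int r) const
    {
        int k = log2floor[r - l + 1];
        return std::max(big[k][l], big[k][r - (1 << k) + 1]);
    }
    int rangeMin(int l, int r) const
    {
        int k = log2floor[r - l + 1];
        return std::min(small[k][l], small[k][r - (1 << k) + 1]);
    }
};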
You're asking for a data structure that will answer min and max queries for intervals on an array quickly.
You want to build two segment trees on your input array; one for answering interval minimum queries and one for answering interval maximum queries. This takes linear preprocessing, linear extra space, and allows queries to take logarithmic time.
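A small sketch of such a pair of trees (my own code, an iterative bottom-up segment tree holding both minima and maxima; O(n) build, O(log n) per query, and it assumes a non-empty array):

#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

struct MinMaxSegTree {
    int n;
    std::vector<int> mn, mx;

    explicit MinMaxSegTree(const std::vector<int>& a)
        : n((int)a.size()), mn(2 * a.size(), INT_MAX), mx(2 * a.size(), INT_MIN)
    {
        for (int i = 0; i < n; ++i) mn[n + i] = mx[n + i] = a[i];   // leaves
        for (int i = n - 1; i >= 1; --i) {                          // internal nodes
            mn[i] = std::min(mn[2 * i], mn[2 * i + 1]);
            mx[i] = std::max(mx[2 * i], mx[2 * i + 1]);
        }
    }

    // Returns {min, max} over the inclusive range [l, r].
    std::pair<int, int> query(int l, int r) const
    {
        int lo = INT_MAX, hi = INT_MIN;
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) { lo = std::min(lo, mn[l]); hi = std::max(hi, mx[l]); ++l; }
            if (r & 1) { --r; lo = std::min(lo, mn[r]); hi = std::max(hi, mx[r]); }
        }
        return {lo, hi};
    }
};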
I am afraid there is no faster way. Your data is completely random, so you have to go through every value.
Even sorting won't help you, because it is at best O(n log n), so it's slower. You can't use a bisection method, because the data is not sorted. If you start building data structures (like a heap), it will again be O(n log n) at best.
If the array is very large, then split it into partitions and use threads to do a linear check of each partition. Then take the min/max of the results from the threads.
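A quick sketch of that idea using std::async (my own code; the partition count is arbitrary, and a non-empty array is assumed):

#include <algorithm>
#include <future>
#include <utility>
#include <vector>

std::pair<int, int> parallelMinMax(const std::vector<int>& a, std::size_t parts = 4)
{
    std::vector<std::future<std::pair<int, int>>> tasks;
    std::size_t chunk = (a.size() + parts - 1) / parts;
    for (std::size_t begin = 0; begin < a.size(); begin += chunk) {
        std::size_t end = std::min(a.size(), begin + chunk);
        tasks.push_back(std::async(std::launch::async, [&a, begin, end] {
            // Each task scans one partition linearly.
            auto mm = std::minmax_element(a.begin() + begin, a.begin() + end);
            return std::make_pair(*mm.first, *mm.second);
        }));
    }
    int lo = a[0], hi = a[0];
    for (auto& t : tasks) {                  // combine the per-thread results
        std::pair<int, int> p = t.get();
        lo = std::min(lo, p.first);
        hi = std::max(hi, p.second);
    }
    return {lo, hi};
}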
Searching for min and max in an unsorted array can only be optimized by taking two values at a time and comparing them to each other first:
register int min, max, i;
min = max = array[0];
/* compare the elements in pairs: one comparison decides which of the two
   can only be a new min and which can only be a new max */
for(i = 1; i + 1 < length; i += 2)
{
    if(array[i] < array[i+1])
    {
        if(min > array[i])   min = array[i];
        if(max < array[i+1]) max = array[i+1];
    }
    else
    {
        if(min > array[i+1]) min = array[i+1];
        if(max < array[i])   max = array[i];
    }
}
/* odd number of remaining elements: one leftover value to check */
if(i < length)
{
    if(min > array[i]) min = array[i];
    else if(max < array[i]) max = array[i];
}
But I don't believe it's actually faster. Consider writing it in assembly.
EDIT:
When comparing strings, this algorithm could make the difference!
If you roughly know the min, you can test values upward from some x, checking whether each exists in the array; the first one found is the min. If you roughly know the max, you can test downward from some y; the first value that exists in the array is the max.
For example, from your array (I will assume you have only positive integers):
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
Set x to 0 and test whether 0 exists in the array; it doesn't, so change it to 1; 1 is found, and there is your min.
Set y to 15 (an arbitrary large number): exists? No. Set it to 14: exists? No. 13? No. 12? Yes! There is your max. I only made 4 tests.
If y exists on the first try, you might have tested a value inside the array rather than the max. So test again with y + length / 2; if you suspect you hit the middle of the array, shift it a bit. If you again find the value on the first try, it might still be within the array.
If you have negative and/or float values, this technique does not work :)
Of course it is not possible (as far as I know) to have a sub-linear algorithm for the kind of search you want. However, you can achieve sub-linear time in some cases by storing the min-max of fixed ranges, and with some knowledge of the query ranges you can improve the search time.
e.g. if you know that 'most' of the time the search range will be, say, 10, then you can store the min-max of blocks of 10/2 = 5 elements separately and index those ranges. During a search you have to find the set of stored ranges that the search range covers or overlaps.
e.g. in the example
array - 3 4 2 2 1 3 12 5 7 9 7 10 1 5 2 3 1 1
start,end - 2,7
min,max in [2,7] -- 1,12
if you 'know' that most of the time the search range will be about 5 elements, then you can index the min-max beforehand like this (block size 5/2 = 2):
0-1 min-max (3,4)
2-3 min-max (2,2)
4-5 min-max (1,3)
6-7 min-max (5,12)
...
I think this method works better when the ranges are large, so that the stored min-max values save a good number of lookups.
To find the min-max over [2-7] you look at the stored indexes 2/2 = 1 to 7/2 = 3;
the min of the mins (2, 1, 5) gives you the minimum (1) and the max of the maxes (2, 3, 12) gives you the maximum (12). In case of partial overlap you only have to scan the boundary blocks linearly, so it should still avoid several searches, I think.
It is possible that this algorithm is slower than a linear search (because a linear search has very good locality of reference), so I would advise you to measure both first.
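A rough sketch of this blocking idea in a more generic form (my own code; B is whatever block size you pick, and the max side is symmetric, so it is omitted):

#include <algorithm>
#include <climits>
#include <cstddef>
#include <vector>

// Stores the minimum of every block of B consecutive elements.
struct BlockMin {
    int B;
    std::vector<int> data, blockMin;

    BlockMin(const std::vector<int>& a, int blockSize) : B(blockSize), data(a)
    {
        blockMin.assign((a.size() + B - 1) / B, INT_MAX);
        for (std::size_t i = 0; i < a.size(); ++i)
            if (a[i] < blockMin[i / B]) blockMin[i / B] = a[i];
    }

    // Minimum over the inclusive range [l, r]: whole blocks come from blockMin,
    // and the two partial blocks at the ends are scanned linearly.
    int rangeMin(int l, int r) const
    {
        int lo = INT_MAX;
        while (l <= r && l % B != 0) { lo = std::min(lo, data[l]); ++l; }
        while (l + B - 1 <= r)       { lo = std::min(lo, blockMin[l / B]); l += B; }
        while (l <= r)               { lo = std::min(lo, data[l]); ++l; }
        return lo;
    }
};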
Linear is the best you can do, and it's relatively easy to prove.
Assume an infinite amount of instantaneous memory storage with costless access, just so we can ignore those factors.
Furthermore, we'll treat finding the min and finding the max in a subrange as essentially the same mechanical problem: one magically keeps track of the numbers smaller than the others in a comparison, and the other keeps track of the numbers bigger. This bookkeeping is assumed to be costless.
Let's also assume away the min/max-of-the-sub-array part, because it is just the same problem as the min/max of any array, and treat it as magically solved as part of finding the max in the bigger array. We can do this by assuming that the biggest number in the entire array is, by some magical fluke, the first number we look at, that it is also the biggest number in the sub-array, and that it also happens to be the smallest number in the sub-array, but we just don't happen to know how lucky we are. How can we find out?
The least work we have to do is one comparison between it and every other number in the array to prove it is the biggest/smallest. This is the only action we are assuming has a cost.
How many comparisons do we have to do? We'll let N be the length of the array, and the total number of operations for any length N is N - 1. As we add elements to the array, the number of comparisons scales at the same rate even if all of our widely outrageous assumptions held true.
So we've arrived at the point where N is both the length of the array, and the determinant of the increasing cost of the best possible operation in our wildly unrealistic best case scenario.
Your operation scales with N in the best case scenario. I'm sorry.
/sorting the inputs must be more expensive than this minimal operation, so it would only be applicable if you were doing the operation multiple times and had no way of storing the actual results, which doesn't seem likely, because 10^5 answers is not exactly taxing.
//multithreading and the like is all well and good too; just assume away any cost of doing so and divide N by the number of threads. The best algorithm possible still scales linearly, however.
///I'm guessing it would in fact have to be a particularly curious phenomenon for anything to ever scale better than linearly without assuming things about the data... stackoverflowers?
I want to generate all the Hamiltonian cycles of a complete undirected graph (permutations of a set where rotations and reversals count as duplicates, and are left out).
For example, permutations of {1,2,3} are
Standard Permutations:
1,2,3
1,3,2
2,1,3
2,3,1
3,1,2
3,2,1
What I want the program/algorithm to print for me:
1,2,3
Since 321 is just 123 backward, 312 is just 123 rotated one place, etc.
I see a lot of discussion on the number of these cycles a given set has, and algorithms to determine whether a graph has a Hamiltonian cycle or not, but nothing on how to enumerate them in a complete, undirected graph (i.e. a set of numbers where any number can be preceded or followed by any other number in the set).
I would really like an algorithm or C++ code to accomplish this task, or if you could direct me to where there is material on the topic. Thanks!
You can place some restrictions on the output to eliminate the unwanted permutations. Let's say we want to permute the numbers 1, ..., N. To avoid some special cases, assume that N > 2.
To eliminate simple rotations we can require that the first place is 1. This is true, because an arbitrary permutation can always be rotated into this form.
To eliminate reverses we can require that the number at the second place must be smaller than the number at the last place. This is true, because from the two permutations starting with 1 that are reverses of each other, exactly one has this property.
So a very simple algorithm could enumerate all permutations and leave out the invalid ones. Of course there are optimisations possible. For example permutations that do not start with 1 can easily be avoided during the generation step.
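A small sketch of this filtering approach in C++ (my own code; it fixes 1 in front, assumes N > 2, and prints each cycle in the comma-separated format of the question's example):

#include <algorithm>
#include <iostream>
#include <vector>

void printHamiltonianCycles(int n)
{
    std::vector<int> rest(n - 1);
    for (int i = 0; i < n - 1; ++i) rest[i] = i + 2;      // the numbers 2..n, in sorted order
    do {
        if (rest.front() < rest.back()) {                 // rules out the reversed copy
            std::cout << 1;
            for (int v : rest) std::cout << ',' << v;
            std::cout << '\n';
        }
    } while (std::next_permutation(rest.begin(), rest.end()));
}

For n = 3 this prints exactly one line, 1,2,3, matching the example above.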
An uber-lazy way to check whether a path is the same as one that starts at a different point in the cycle (i.e., the same loop, or the reverse of the same loop) is this:
1: Decide that by convention all cycles will start from the lowest vertex number and continue in the direction of the lower of the two adjacent ordinals.
Hence, all of the above paths would be described in the same way.
The second useful bit of information here:
If you would like to check that two paths are the same, you can concatenate one with itself and check whether the result contains either the second path or the reverse of the second path.
That is,
1 2 3 1 2 3
contains all of the above paths or their reverses. Since the process of finding all Hamiltonian cycles seems much, much slower than the slight inefficiency of this check, I felt I could throw it in :)
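A tiny sketch of that check (my own code):

#include <algorithm>
#include <vector>

// True if b is a rotation of a or of a reversed (i.e. the same cycle).
bool sameCycle(const std::vector<int>& a, const std::vector<int>& b)
{
    if (a.size() != b.size()) return false;
    std::vector<int> doubled(a);
    doubled.insert(doubled.end(), a.begin(), a.end());     // a concatenated with itself
    std::vector<int> rev(b.rbegin(), b.rend());
    return std::search(doubled.begin(), doubled.end(), b.begin(), b.end()) != doubled.end()
        || std::search(doubled.begin(), doubled.end(), rev.begin(), rev.end()) != doubled.end();
}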