I have a problem where I must analyse 500C5 combinations (255244687600) of something. Distributing the work over a 10-node cluster where each node processes roughly 10^6 combinations per second means the job will complete in about seven hours.
The problem I have is distributing the 255244687600 combinations over the 10 nodes. I'd like to present each node with 25524468760 combinations, but the algorithms I'm using can only produce the combinations sequentially. I'd like to be able to pass the set of elements and a range of combination indices, for example [0, 10^7), [10^7, 2*10^7), etc., and have the nodes themselves figure out the combinations.
The algorithms I'm using at the moment are from the following:
http://howardhinnant.github.io/combinations.html
Stack Overflow question Efficiently computing vector combinations
I've considered using a master node that enumerates each of the combinations and sends work to each of the nodes. However, the overhead incurred in iterating the combinations on a single node and communicating the work back and forth is enormous, and the master node would subsequently become the bottleneck.
Are there any good combination-iterating algorithms geared toward efficient/optimal distributed enumeration?
You may have some success with the combinatorial number system, which allows you to retrieve the N'th k-combination (each node's starting combination, at multiples of n/10) with a simple algorithm; then run the next_combination algorithm n/10 times on each of the ten nodes to iterate.
Sample code (in C#, but quite readable for a C++ programmer) can be found on MSDN.
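For illustration, here is a minimal Python sketch of such an unranking routine (my own sketch of the standard combinatorial number system, not the MSDN code; math.comb needs Python 3.8+):

import math

# Returns the index'th k-combination of range(n) as a descending list of
# element indices, using the combinatorial number system.
def nth_combination(index, n, k):
    combo = []
    for j in range(k, 0, -1):
        c = j - 1
        while math.comb(c + 1, j) <= index:   # largest c with C(c, j) <= index
            c += 1
        combo.append(c)
        index -= math.comb(c, j)
    return combo

A node can unrank its starting index once this way and then step forward with a next_combination routine.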
Have node number n process every tenth combination, starting from the nth.
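A minimal sketch of this striding scheme, using Python's itertools for the sequential stream (an assumption; the algorithms linked in the question would work the same way):

from itertools import combinations, islice

# Node `node` of `nodes` total handles combinations node, node + nodes,
# node + 2*nodes, ... of the sequential stream, so no starting-index
# arithmetic is needed.
def node_work(items, k, node, nodes, do_work):
    for combo in islice(combinations(items, k), node, None, nodes):
        do_work(combo)

Note that each node still generates (and discards) the skipped combinations internally, so this removes the master-node bottleneck but not the per-node cost of enumerating the full stream.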
I know this question is old, but here is how it may be done efficiently.
All the code is currently in Python, which I'm sure will be easy enough to translate to C++. You will probably want to move from using an integer for the characteristic vector to a bit array, since the integers used need 500 bits (not a problem in Python). Feel free to update it to C++, anyone.
1. Distribute to each node its range of combinations (a start number and a length to process), the set of items from which combinations are to be chosen, and the number, k, of items to choose.
2. Initialise each node by having it find its starting combination directly from start and items.
3. Run each node by having it do the work for its first combination, then iterate through the rest of its combinations, doing the associated work for each.
To perform step 1, do as you suggest: find n-choose-k and divide it into ranges. In your case 500-choose-5 is, as you said, 255244687600, so for node = 0 to 9 you distribute: (start=node*25524468760, length=25524468760, items=items, k=5)
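If the total ever fails to divide evenly, the remainder has to go somewhere; here is a small illustrative helper (names are mine; math.comb needs Python 3.8+) that computes such (start, length) ranges:

import math

# Split the combination index space [0, C(n, k)) into per-node
# (start, length) ranges, spreading any remainder over the first nodes.
def make_ranges(n, k, nodes):
    total = math.comb(n, k)            # 500 choose 5 -> 255244687600
    per, extra = divmod(total, nodes)
    start = 0
    for node in range(nodes):
        length = per + (1 if node < extra else 0)
        yield node, start, length
        start += length

For (500, 5, 10) the division is exact and every node gets length 25524468760.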
To perform step 2 you can find the starting combination directly (without iteration) using the combinatorial number system, and find the integer representation of the combination's characteristic vector (to be used for the iteration in step 3) at the same time:
def getCombination(index, items, k):
    '''Returns (combination, characteristicVector)

    combination - The single combination, of k elements of items, that would be
    at the provided index if all possible combinations had each been sorted in
    descending order (as defined by the order of items) and then placed in a
    sorted list.

    characteristicVector - an integer with the chosen items' bits set.
    '''
    combination = []
    characteristicVector = 0
    n = len(items)
    nCk = 1
    # compute nCk = C(n, k) incrementally
    for nMinusI, iPlus1 in zip(range(n, n - k, -1), range(1, k + 1)):
        nCk *= nMinusI
        nCk //= iPlus1
    curIndex = nCk
    for k in range(k, 0, -1):   # deliberately shadows k: choices left to make
        nCk *= k
        nCk //= n               # nCk is now C(n-1, k-1)
        while curIndex - nCk > index:
            curIndex -= nCk
            # step nCk from C(n-1, k-1) down to C(n-2, k-1)
            nCk *= (n - k)
            nCk -= nCk % k
            n -= 1
            nCk //= n
        n -= 1
        combination.append(items[n])
        characteristicVector += 1 << n
    return combination, characteristicVector
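For a quick sanity check against the worked example further down (index 0 of 7-choose-3 should be C B A):

>>> getCombination(0, ('A','B','C','D','E','F','G'), 3)
(['C', 'B', 'A'], 7)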
The integer representation of the characteristic vector has k bits set in the positions of the items that make up the combination.
To perform step 3 you can use Gosper's hack to iterate to the next characteristic vector for the combination in the same number system (the next combination that would appear in a sorted list of reverse-sorted combinations relative to items) and, at the same time, create the combination:
def nextCombination(items, characteristicVector):
    '''Returns the next (combination, characteristicVector).

    combination - The next combination of items that would appear after the
    combination defined by the provided characteristic vector if all possible
    combinations had each been sorted in descending order (as defined by the
    order of items) and then placed in a sorted list.

    characteristicVector - an integer with the chosen items' bits set.
    '''
    # Gosper's hack: the next integer with the same number of bits set
    u = characteristicVector & -characteristicVector
    v = u + characteristicVector
    if v <= 0:
        raise OverflowError("Ran out of integers")  # <- ready for C++
    characteristicVector = v + (((v ^ characteristicVector) // u) >> 2)
    # read the combination off the bits of the new vector
    combination = []
    copiedVector = characteristicVector
    index = len(items) - 1
    while copiedVector > 0:
        present, copiedVector = divmod(copiedVector, 1 << index)
        if present:
            combination.append(items[index])
        index -= 1
    return combination, characteristicVector
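Continuing the sanity check from that vector (0b0000111 == 7), the next combination matches the example output below:

>>> nextCombination(('A','B','C','D','E','F','G'), 7)
(['D', 'B', 'A'], 11)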
Repeat this length-1 times (since you already found the first one directly).
For example:
Five nodes processing 7-choose-3 letters:
>>> items = ('A','B','C','D','E','F','G')
>>> k = 3
>>> nodes = 5
>>> n = len(items)
>>> for nmip1, i in zip(range(n - 1, n - k, -1), range(2, k + 1)):
... n = n * nmip1 // i
...
>>> for node in range(nodes):
... length = n // nodes
... start = node * length
... print("Node {0} initialised".format(node))
... combination, cv = getCombination(start, items, k)
... doWork(combination)
... for i in range(length-1):
... combination, cv = nextCombination(items, cv)
... doWork(combination)
...
Node 0 initialised
Doing work with: C B A
Doing work with: D B A
Doing work with: D C A
Doing work with: D C B
Doing work with: E B A
Doing work with: E C A
Doing work with: E C B
Node 1 initialised
Doing work with: E D A
Doing work with: E D B
Doing work with: E D C
Doing work with: F B A
Doing work with: F C A
Doing work with: F C B
Doing work with: F D A
Node 2 initialised
Doing work with: F D B
Doing work with: F D C
Doing work with: F E A
Doing work with: F E B
Doing work with: F E C
Doing work with: F E D
Doing work with: G B A
Node 3 initialised
Doing work with: G C A
Doing work with: G C B
Doing work with: G D A
Doing work with: G D B
Doing work with: G D C
Doing work with: G E A
Doing work with: G E B
Node 4 initialised
Doing work with: G E C
Doing work with: G E D
Doing work with: G F A
Doing work with: G F B
Doing work with: G F C
Doing work with: G F D
Doing work with: G F E
>>>
You are given a permutation p1,p2,...,pn of numbers from 1 to n.
A permutation is a sequence of integers from 1 to n of length n containing each number exactly once.
You are given q queries, where each query consists of two integers a and b. In response to each query you need to return the number of inversions of the permutation after swapping the elements at indices a and b. Every query is independent, i.e. after each query the permutation is restored to its initial state.
An inversion in a permutation p is a pair of indices (i, j) such that i > j and pi < pj. For example, a permutation [4, 1, 3, 2] contains 4 inversions: (2, 1), (3, 1), (4, 1), (4, 3).
Input: The first line contains n,q.
The second line contains the space-separated permutation p1,p2,...,pn.
Each line of the next q lines contains two integers a,b.
Output: For each query print an integer denoting the number of inversions on a new line.
Sample input:
5 5
1 2 3 4 5
1 2
1 3
2 5
2 4
3 3
Output:
1
3
5
3
0
Constraints:
2<=n<=1000
1<=q<=200000
My approach: I am counting the number of inversions using a BIT (https://www.geeksforgeeks.org/count-inversions-array-set-3-using-bit/) for each query, after swapping the elements at positions a and b, and then swapping them back so that my array remains unchanged. But this solution gives TLE for large test cases. Is there any better approach for this problem?
You are getting TLE probably because the number of computations in this approach is q * (n * log(n)) = 2 * 10^5 * 10^3 * log(1000) ≈ 10^9, which is more than the generally accepted ~10^8.
I can think of the following solution. Please note that I have not coded / verified it:
Denote ri == the number of indices j such that i > j && pi < pj. E.g. in [2, 3, 1, 4], r3 = 2. Basically, it is the number of inversions whose farther index is i. (Please note that I am using 1-based indexing as per the question; also, a < b as per the question.)
Thus we have: sum of ri == #invs (the number of inversions).
We can calculate the initial total #invs in O(n^2).
When a and b are swapped, we can observe that:
a) ri remains constant for i < a.
b) ri remains constant for i > b.
Only ri for a <= i <= b changes, and only under the following conditions. I am considering the case pa < pb; the exact opposite case needs to be considered when pa > pb.
a) Since pa < pb, the swap itself causes #invs = #invs + 1.
b) If (pi < pa && pi < pb) || (pi > pa && pi > pb), the swap does not change ri. E.g. in [2,....10,....5], swapping 2 and 5 does not change the r value for 10.
c) If pa < pi < pb, it will increment ri by 1, and the new rb by 1. E.g. in [2,....3,.....4], when 2 and 4 are swapped we get [4,....3,....2]: the r value of 3 increases by 1 (because of 4), and the r value of 2 also increases by 1 (because of 3). The increment due to 4 > 2 itself was already counted in step (a), and needs to be counted only once.
d) We need to find all such indices i where pa < pi < pb. Let us call this count f(a, b). Then the total change in #invs is delta = (2 * f(a, b)) + 1, and the answer will be #original_invs + delta.
As I mentioned, all the exact opposite steps need to be done for the case pa > pb. The delta will be negative in that case.
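Before the tree-based pre-processing below, here is a minimal sketch of the delta formula itself, with f(a, b) computed by a naive linear scan (my own code; 0-based indices):

def answer_query(p, total_invs, a, b):
    '''Inversions after swapping p[a] and p[b]; p itself is left unchanged.'''
    if a > b:
        a, b = b, a
    if a == b:
        return total_invs
    lo, hi = sorted((p[a], p[b]))
    # f(a, b): positions strictly between a and b holding a value strictly
    # between p[a] and p[b]
    f = sum(1 for i in range(a + 1, b) if lo < p[i] < hi)
    delta = 2 * f + 1
    return total_invs + (delta if p[a] < p[b] else -delta)

Since n <= 1000, this naive scan is O(n) per query; the pre-processing described next replaces it with a lookup.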
Now, the only thing remaining is to solve: given a and b, find f(a, b) efficiently. For this, we can pre-process and store it for all pairs of indices. This takes O(N^2) space and O(N^2 * log(N)) time, using a balanced binary search tree (BST). Again, I am showing the pre-processing steps for the case pa < pb only; another set of pre-processing steps is needed for the other case:
We will use a self-balancing BST in which each node also contains the following fields:
a) field_1: the size of the node's left subtree. This value is updated on every insert operation if the size of the left subtree changes.
b) field_2: the number of elements < node.value that the tree contained when this node was inserted. It is initialized once, when the node is inserted, and does not change thereafter. I have added a small explanation of how this is achieved in Addendum A. This field is basically our pre-processing, and will determine f(a, b).
With all of this, for each index i, where 0 <= i < n, do the following: create a new tree, then insert the values pj one by one, for (i < j < n) && (pi < pj). (Please note we are not inserting values where pi > pj.) The method given in Addendum A makes sure we find f(i, j) while inserting.
There will be n such pre-processed trees, one for every index. To find f(a, b): look into the a'th tree and search for node.value = pb. That node's field_2 = f(a, b).
The complexity of insertion is O(log N), so the total pre-processing computation is O(N * N * log N). Search is O(log N), so the query complexity is O(q * log N). Total complexity = O(N^2) + O(N^2 * log N) + O(q * log N), which turns out to be ~10^7.
==============================================================================
Addendum A: how to populate field_2 while inserting a node:
i) Insert the node and balance the tree, updating field_1 as required.
ii) Initialize ans = 0, then traverse the BST from the root searching for your node.
iii) do {
         if node.value < search_key_b, ans += node.left_subtree_size + 1
     } while (!node.found)
iv) ans -= 1
We can solve this in O(n log n) space and O(n log n + Q * log^2(n)) time with a merge-sort tree. The merge-sort tree allows us to find the number of elements inside a subarray that are greater than or lower than an input number in O(log^2(n)) time and O(n log n) space.
First we record the total number of inversions in O(n log n) time, for which there are known methods. To query the effect of a swap bounded by positions left and right, consider the subarray strictly between them:
- subtract the number of elements greater than right in the subarray (those will no longer be inversions)
- subtract the number of elements smaller than left in the subarray (those will no longer be inversions)
- add the number of elements greater than left in the subarray (those will be new inversions)
- add the number of elements smaller than right in the subarray (those will be new inversions)
- if right > left, add 1
- if left > right, subtract 1
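Since the answer above is language-agnostic, here is a hedged Python sketch of the scheme it describes (a merge-sort tree over a power-of-two segment tree; all names are mine):

import bisect

# Each segment-tree node stores the sorted values of its range.
def build(p):
    size = 1
    while size < len(p):
        size *= 2
    tree = [[] for _ in range(2 * size)]
    for i, v in enumerate(p):
        tree[size + i] = [v]
    for i in range(size - 1, 0, -1):
        tree[i] = sorted(tree[2 * i] + tree[2 * i + 1])  # merge the children
    return tree, size

def count_less(tree, size, lo, hi, x):
    '''Number of elements < x in p[lo..hi] (inclusive), in O(log^2 n).'''
    res = 0
    l = lo + size
    r = hi + size + 1
    while l < r:
        if l & 1:
            res += bisect.bisect_left(tree[l], x)
            l += 1
        if r & 1:
            r -= 1
            res += bisect.bisect_left(tree[r], x)
        l //= 2
        r //= 2
    return res

def swap_delta(tree, size, p, a, b):
    '''Change in the inversion count if p[a] and p[b] were swapped (0-based).'''
    if a > b:
        a, b = b, a
    if a == b:
        return 0
    inside = b - a - 1                    # positions strictly between a and b
    less_a = count_less(tree, size, a + 1, b - 1, p[a])
    less_b = count_less(tree, size, a + 1, b - 1, p[b])
    # values are distinct, so "greater than x" inside = inside - "less than x"
    delta = -(inside - less_b) - less_a + (inside - less_a) + less_b
    return delta + (1 if p[b] > p[a] else -1)

The answer for a query is the precomputed total inversion count plus swap_delta(...); building the tree is O(n log n) and each query costs O(log^2 n), matching the bounds above.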
I'm doing a problem that says that we have to get from one number, n, to another, m, in as few steps as possible, where each "step" can be 1) doubling, or 2) subtracting one. The natural approach is to construct a binary tree and run BFS, since we are given that n and m are bounded by 0 ≤ n, m ≤ 10^4, so the tree doesn't get that big. However, I ran into a stunningly short solution, and have no idea why it works. It basically goes from m to n instead, halving or adding one as necessary to decrease m until it is less than n, and then just adding to get up to n. Here is the code:
while(n<m){
if (m%2) m++;
else m /= 2;
count++;
}
count = count + n - m;
return count;
Is it obvious why this is necessarily the shortest path? I get that going from m to n is natural because n is lower-bounded by zero, so the tree becomes more "finite" in some sense, but this method of modified halving until you get below the number, then adding up until you reach it, doesn't seem like it should necessarily always return the correct answer, yet it does. Why, and how might I have recognized this approach from the get-go?
You only have 2 available operations:
double n
subtract 1 from n
That means the only way to go up is to double and the only way to go down is to subtract 1.
If m is an even number, then you can land on it by doubling n when 2*n = m. Otherwise, you will have to subtract 1 as well (if 2*n = m + 1 then you will have to double n and then subtract 1).
If doubling n lands you too far above m, then you will have to subtract twice as many times as if you had subtracted before doubling n.
example:
n = 12 and m = 20.
You can either double n and then subtract 4 times, as in 12*2 - 4 = 20 (5 steps).
Or you can subtract twice and then double n, as in (12-2)*2 = 20 (3 steps).
You might be wondering: 'How should I pick between doubling and subtracting when n < m/2?'
The idea is to use a recurrence-based approach. You know that you want n to reach a value v such that v = m/2 or v = (m+1)/2. In other words, you want n to reach v, and the shortest way to do that is to reach a value v' such that v' = v/2 or v' = (v+1)/2, and so on.
example:
n = 2 and m = 21.
You want n to reach (21+1)/2 = 11 which means you want to reach (11+1)/2 = 6 and thus to reach 6/2=3 and thus to reach (3+1)/2 = 2.
Since n=2 you now know that the shortest path is: (((n*2-1)*2)*2-1)*2-1.
other example:
n = 14 and m = 22.
You want n to reach 22/2 = 11.
n is already above 11 so the shortest path is : (n-1-1-1)*2.
From here, you can see that the shortest path can be deduced without a binary tree.
On top of that, you have to think starting from m and going down to an obvious path for n. This implies that it will be easier to code an algorithm going from m to n than the opposite.
Using recurrence, this function achieves the same result:
function shortest(n, m) {
if (n >= m) return n-m; //only way to go down
if(m%2==0) return 1 + shortest(n, m/2); //if m is even => optimum goal is m/2
else return 2 + shortest(n, (m+1)/2);//else optimum goal is (m+1)/2 which necessitates 2 operations
}
I have an array of integers and need to apply a variant of the subset sum algorithm on it, except that instead of finding a set of integers whose sum is 0 I am trying to find a set of integers whose sum is n. I am unclear as to how to adapt one of the standard subset sum algorithms to this variant and was hoping for any insight into the problem.
This is the subset sum problem, which is NP-complete (there is no known efficient solution to NP-complete problems), but if your numbers are relatively small integers there is an efficient pseudo-polynomial solution that follows the recurrence:
D(x, i) = false                              if x < 0
D(0, i) = true
D(x, 0) = false                              if x != 0
D(x, i) = D(x, i-1) OR D(x - arr[i], i-1)
Later, you need to step back on your choices, see where you decided to "reduce" (take the element), and where you decided not to "reduce" (not take the element), on the generated matrix.
This thread and this thread discuss how to get the elements for similar problems.
Here is Python code (taken from the thread I linked to) that does the trick.
If you are not familiar with Python, read it as pseudocode; it's pretty easy to understand!
from random import randint

arr = [1, 2, 4, 5]
n = len(arr)
SUM = 6

# pre-processing:
D = [[True] * (n + 1)]
for x in range(1, SUM + 1):
    D.append([False] * (n + 1))

# DP solution to populate D:
for x in range(1, SUM + 1):
    for i in range(1, n + 1):
        D[x][i] = D[x][i - 1]
        if x >= arr[i - 1]:
            D[x][i] = D[x][i] or D[x - arr[i - 1]][i - 1]
print(D)

# get a random solution:
if not D[SUM][n]:
    print('no solution')
else:
    sol = []
    x = SUM
    i = n
    while x != 0:
        possibleVals = []
        if D[x][i - 1]:
            possibleVals.append(x)                    # choice: skip element i
        if x >= arr[i - 1] and D[x - arr[i - 1]][i - 1]:
            possibleVals.append(x - arr[i - 1])       # choice: take element i
        # possibleVals now holds 1 or 2 valid predecessors; pick one at random
        r = possibleVals[randint(0, len(possibleVals) - 1)]
        if r != x:            # we decided to take the element
            sol.append(x - r)
        # modify i and x accordingly
        x = r
        i = i - 1
    print(sol)
You can solve this by using dynamic programming.
Let's assume that:
N is the required sum (your first input).
M is the number of summands available (your second input).
a1...aM are the summands available.
f[x] is true when the sum x is reachable, and false otherwise.
Now the solution:
Initially f[0] = true and f[1..N] = false: we can reach only the sum of zero without taking any summand.
Now you can iterate over all ai, where i in [1..M], and for each of them perform the following update:
f[x + ai] = f[x + ai] || f[x], for each x from N - ai down to 0. The order of processing is relevant: iterating x downwards ensures each summand is used at most once.
Finally you output f[N].
This solution has complexity O(N*M), so it is not very useful when you have either large input numbers or a large number of summands.
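As a minimal sketch of this 1-D formulation (my own code, not from the answer above):

def can_reach(summands, n):
    f = [False] * (n + 1)
    f[0] = True                      # the empty sum
    for a in summands:
        # descending loop: each summand contributes at most once
        for x in range(n - a, -1, -1):
            if f[x]:
                f[x + a] = True
    return f[n]

can_reach([1, 2, 4, 5], 6) returns True, matching the example in the previous answer.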
I was trying to solve this problem from HackerRank. I tried the brute-force solution but it doesn't seem to work. Can someone give me an idea to solve this problem efficiently?
https://www.hackerrank.com/contests/sep13/challenges/sherlock-puzzle
Given a binary string (S) which contains ‘0’s and ‘1’s and an integer K,
find the length (L) of the longest contiguous subsequence of (S * K) such that twice the number of zeroes is <= thrice the number of ones (2 * #0s <= 3 * #1s) in that sequence.
S * K is defined as follows: S * 1 = S
S * K = S + S * (K - 1)
Input Format
The first (and only) line contains an integer K and the binary string S separated by a single space.
Constraints
1 <= |S| <= 1,000,000
1 <= K <= 1,000,000
Output Format
A single integer L - the answer to the test case
Here's a hint:
Let's first suppose K = 1 and that S looks like (using a dot for 0):
..1...11...11.....111111....111....
  e   f           b a  c    d
The key is to note that if the longest acceptable sequence contains a 1 it will also contain any adjacent ones. For example, if the longest sequence contains the 1 at a, it will also contain all of the ones between b and c (inclusive).
So you only have to analyze the sequence at the points where the blocks of ones are.
The main question is: if you start at a certain block of ones, can you make it to the next block of ones? For instance, if you start at e you can make it to the block at f but not to b. If you start at b you can make it to the block at d, etc.
Then generalize the analysis for K > 1.
Brute force obviously won't work since it's O((n * k)^2). I will use Python-style list comprehensions in this answer. You'll need an array t = [3 if el == "1" else -2 for el in S]. Now if you build the prefix sums p[i] = t[0] + ... + t[i], you can see that in the k == 1 case you are basically looking for a pair (i, j), i < j, such that p[j] - (p[i - 1] if i != 0 else 0) >= 0 holds and j - i is maximal among these pairs.
For each i in 0..n-1 you then have to find its matching j such that the above is maximal. This can be done in O(log n) for a specific i, which gives an O(n log n) solution for the k == 1 case. This can be extended to an O(n log n) solution for the general case (there is a trick to find the largest block that can be covered). There is also an O(n) solution to this problem, but you need to examine the p sequence further for that. I don't suggest writing a solution in a scripting language, though; even the O(n) solution times out in Python...
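For the k == 1 case, here is a sketch of the O(n) variant mentioned at the end, using a 0-prefixed version of the p array plus suffix maxima and two pointers (names are mine; extending it to general K is the part the answer leaves out):

def longest_valid(s):
    # weight +3 per '1', -2 per '0'; a window is valid iff its weight sum >= 0
    t = [3 if c == "1" else -2 for c in s]
    P = [0]                                # P[i] = t[0] + ... + t[i-1]
    for v in t:
        P.append(P[-1] + v)
    M = P[:]                               # M[i] = max(P[i:]), suffix maxima
    for i in range(len(P) - 2, -1, -1):
        M[i] = max(M[i], M[i + 1])
    best = i = j = 0
    while j < len(P):
        if M[j] >= P[i]:                   # some window starting at i still works
            best = max(best, j - i)
            j += 1
        else:
            i += 1
    return best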
I'm a newbie to Haskell. Currently I need a function f which, given two integers, returns the number of prime numbers between them (i.e., greater than the first integer but smaller than the second).
Main> f 2 4
1
Main> f 2 10
3
Here is my code so far, but it doesn't work. Any suggestions? Thanks.
f :: Int -> Int -> Int
f x y
| x < y = length [ n | n <- [x..y], y 'mod' n == 0]
| otherwise = 0
Judging from your example, you want the number of primes in the open interval (x,y), which in Haskell is denoted [x+1 .. y-1].
Your primality testing is flawed; you're testing for factors of y.
To use a function name as an infix operator, use backticks (`), not single quotes (').
Try this instead:
-- note: no need for the otherwise, since [x..y] == [] if x>y
nPrimes a b = length $ filter isPrime [a+1 .. b-1]
Exercise for the reader: implement isPrime. Note that it only takes one argument.
Look at what your list comprehension does.
n <- [x..y]
Draw n from a list ranging from x to y.
y `mod` n == 0
Only select those n which evenly divide y.
length (...)
Find how many such n there are.
What your code currently does is find how many of the numbers between x and y (inclusive) are factors of y. So if you do f 2 4, the list will be [2, 4] (the numbers that evenly divide 4), and the length of that is 2. If you do f 2 10, the list will be [2, 5, 10] (the numbers that evenly divide 10), and the length of that is 3.
It is important to try to understand for yourself why your code doesn't work. In this case, it's simply the wrong algorithm. For algorithms that find whether a number is prime, among many other sources, you can check the wikipedia article: Primality test.
If you want to work with large intervals, then it might be a better idea to compute a list of primes once (instead of doing an isPrime test for every number):
primes :: [Int]
primes = sieve [2..]          -- all primes, via simple trial division
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- the candidates [a+1 .. b-1] are consecutive integers, so intersecting them
-- with the sorted primes list is just taking the primes in the open interval:
nPrimes a b = length (takeWhile (< b) (dropWhile (<= a) primes))