The other day I encountered a problem related to range queries, but I can't solve it.
Given an array with N integers and a positive integer M, you must answer Q queries. Each query is characterized as ( i , j ), where i and j are each indices of the array. In each query you must answer how many pairs ( r , s ) exist such that
i <= r <= s <= j
the sum of the array elements with indices in [ r , s ] is divisible by M.
Limits:
N <= 50,000
Q <= 50,000
M <= 100
I have a dynamic programming solution that precomputes the answer for every pair ( r , s ) in O( N^2 ), but that is not fast enough. Is there a more efficient solution? I have some ideas involving Mo's algorithm or segment trees, but I can't get them to work.
Calculate the prefix sums Sum[i] of the original array, taken modulo M (assuming the array is 1-based), for every i = 1..N.
Sum[r] ≡ Sum[s] (mod M) for two indices r < s means that the sum of the array elements with indices in [r+1, s] is divisible by M (and we need to count the number of such equal pairs within the query interval). The time complexity of this step is O(N).
Precalculate the array Count for every i = 1..N, j = 0..M-1:
Count[i][j] stores the number of indices len <= i (including len = 0, with Sum[0] = 0) for which Sum[len] equals j. Time complexity of this step is O(N*M).
For every query (i, j) the answer is computed as follows:
For every possible value of the remainder k we find D(k), the number of indices len in [i-1, j] with Sum[len] equal to k (that is, Count[j][k] - Count[i-2][k], with the subtracted term taken as 0 when i = 1). Then we add to the result the number of pairs among those positions, D(k)*(D(k)-1)/2: every such pair (len1 < len2) corresponds to a subarray [len1+1, len2] whose sum is divisible by M. Time complexity: O(M) per query.
Complexity: O(N) + O(N*M) + O(Q*M) = O((Q+N)*M), which is fine for the given constraints.
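A minimal C++ sketch of this approach (the input format — N, M and Q on the first line, then the array, then one query per line — is an assumption for illustration):

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m, q;
    cin >> n >> m >> q;
    // Sum[i] = (a[1] + ... + a[i]) mod M, with Sum[0] = 0
    vector<int> Sum(n + 1, 0);
    for (int i = 1; i <= n; ++i) {
        long long x;
        cin >> x;
        Sum[i] = (int)(((Sum[i - 1] + x) % m + m) % m);
    }
    // Count[i][k] = how many of Sum[0..i] are equal to k  (O(N*M) memory)
    vector<vector<int>> Count(n + 1, vector<int>(m, 0));
    Count[0][Sum[0]] = 1;
    for (int i = 1; i <= n; ++i) {
        Count[i] = Count[i - 1];
        Count[i][Sum[i]]++;
    }
    while (q--) {
        int i, j;
        cin >> i >> j;                       // 1-based query [i, j]
        long long ans = 0;
        for (int k = 0; k < m; ++k) {
            // D = number of prefix positions len in [i-1, j] with Sum[len] == k
            long long D = Count[j][k] - (i >= 2 ? Count[i - 2][k] : 0);
            ans += D * (D - 1) / 2;          // each pair (len1 < len2) is a subarray [len1+1, len2]
        }
        cout << ans << "\n";
    }
    return 0;
}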
First note that for any subarray (r, s) that sums to a multiple of M:
sum(r, s) == sum(i, s) - sum(i, r - 1)
== (qa * M + ra) - (qb * M + rb)
where ra and rb are both less than M and greater than or equal to 0 (i.e. the respective remainders after dividing by M).
Now sum(r, s) is divisible by M, so its remainder is 0 after dividing by M. Therefore:
ra == rb
If we calculate all the remainders after dividing the sums of the subarrays (i, i), (i, i + 1), ..., (i, j) by M, and store the count of each remainder in an array R of size M so that R[k] is the number of remainders equal to k, then:
R[0] == the number of subarrays starting at i that are divisible by M
and for every k >= 0 and k < M such that R[k] > 1 we can count R[k] choose 2:
(R[k] * (R[k] - 1)) / 2
subarrays not starting at i that are divisible by M.
Building R and summing all these values gives us the answer in O( N + M ) for each (i, j) query.
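A rough per-query helper along these lines (a hypothetical function name; `a` is the 1-based input array and M the modulus), running in O(N + M) per query:

// Sketch only: counts subarrays of a[i..j] whose sum is divisible by M.
long long countDivisible(const vector<long long>& a, int M, int i, int j) {
    vector<long long> R(M, 0);
    long long rem = 0, answer = 0;
    for (int s = i; s <= j; ++s) {
        rem = ((rem + a[s]) % M + M) % M;    // remainder of sum(i, s)
        ++R[rem];
    }
    answer += R[0];                          // subarrays starting at i
    for (int k = 0; k < M; ++k)
        answer += R[k] * (R[k] - 1) / 2;     // subarrays starting after i
    return answer;
}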
I attempted this SPOJ problem.
Problem:
AMR10J - Mixing Chemicals
There are N bottles each having a different chemical. For each chemical i, you have determined C[i] which means that mixing chemicals i and C[i] causes an explosion. You have K distinct boxes. In how many ways can you divide the N chemicals into those boxes such that no two chemicals in the same box can cause an explosion together?
INPUT
The first line of input is the number of test cases T. T test cases follow each containing 2 lines.
The first line of each test case contains 2 integers N and K.
The second line of each test case contains N integers, the ith integer denoting the value C[i]. The chemicals are numbered from 0 to N-1.
OUTPUT
For each testcase, output the number of ways modulo 1,000,000,007.
CONSTRAINTS
T <= 50
2 <= N <= 100
2 <= K <= 1000
0 <= C[i] < N
For all i, i != C[i]
SAMPLE INPUT
3
3 3
1 2 0
4 3
1 2 0 0
3 2
1 2 0
SAMPLE OUTPUT
6
12
0
EXPLANATION
In the first test case, we cannot mix any 2 chemicals. Hence, each of the 3 boxes must contain 1 chemical, which leads to 6 ways in total.
In the third test case, we cannot put the 3 chemicals in the 2 boxes satisfying all the 3 conditions.
To summarize the problem: given a set of chemicals and a set of boxes, count the number of ways to place the chemicals into the boxes such that no two chemicals in the same box explode together.
At first I used a brute force method: I recursively place chemicals into boxes and count valid configurations. I got TLE on my first attempt.
Later I learned that the problem can be solved with graph colouring.
I can represent chemicals as vertices, with an edge between two chemicals if they cannot be placed in the same box.
The set of boxes can then be used as the vertex colours, and all I need to do is count how many different valid colourings of the graph there are.
I applied this concept to solve the problem, but unfortunately I got TLE again. I don't know how to improve my code; I need help.
code:
#include <bits/stdc++.h>
#define MAXN 100
using namespace std;
const int mod = (int) 1e9 + 7;
int n;
int k;
int ways;

void greedy_coloring(vector<int> adj[], int color[])
{
    int u = 0;
    for (; u < n; ++u)
        if (color[u] == -1) // found the first uncolored vertex
            break;
    if (u == n) // no uncolored vertices means all vertices are colored
    {
        ways = (ways + 1) % mod;
        return;
    }
    bool available[k];
    memset(available, true, sizeof(available));
    for (int v : adj[u])
        if (color[v] != -1) // if an adjacent vertex is colored, make its color unavailable
            available[color[v]] = false;
    for (int c = 0; c < k; ++c)
        if (available[c])
        {
            color[u] = c;
            greedy_coloring(adj, color);
            color[u] = -1; // don't forget to reset the color
        }
}

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int T;
    cin >> T;
    while (T--)
    {
        cin >> n >> k;
        vector<int> adj[n];
        int c[n];
        for (int i = 0; i < n; ++i)
        {
            cin >> c[i];
            adj[i].push_back(c[i]);
            adj[c[i]].push_back(i);
        }
        ways = 0;
        int color[n];
        memset(color, -1, sizeof(color));
        greedy_coloring(adj, color);
        cout << ways << "\n";
    }
    return 0;
}
Counting the number of colorings in a general graph is #P-hard, but this graph has some special structure, which I'll exploit in a minute after I enumerate some basic properties of counting colorings. The first observation is that, if the graph has a node with no neighbors, if we delete that node, the number of colorings decreases by a factor of k. The second observation is that, if a node has exactly one neighbor and we delete it, the number of colorings decreases by a factor of k-1. The third is that the number of colorings is equal to the product of the number of colorings for each connected component. The fourth is that we can delete all but one parallel edge.
Using these properties, it suffices to determine a formula for each connected component of the 2-core of this graph, which is a simple cycle of some length. Let P(n) and C(n) be the number of ways to color a path or cycle respectively with n nodes. We use the basic properties above to find
P(n) = k (k-1)^(n-1).
Finding a formula for C(n) I think requires the deletion contraction formula, which leads to a recurrence
C(3) = k (k-1) (k-2), i.e., three nodes of different colors;
C(n) = P(n) - C(n-1) = k (k-1)^(n-1) - C(n-1).
Multiply the above recurrence by (-1)^n.
(-1)^3 C(3) = -k (k-1) (k-2)
(-1)^n C(n) = (-1)^n k (k-1)^(n-1) - (-1)^n C(n-1)
= (-1)^n k (k-1)^(n-1) + (-1)^(n-1) C(n-1)
(-1)^n C(n) - (-1)^(n-1) C(n-1) = (-1)^n k (k-1)^(n-1)
Let D(n) = (-1)^n C(n).
D(3) = -k (k-1) (k-2)
D(n) - D(n-1) = (-1)^n k (k-1)^(n-1)
Now we can write D(n) as a telescoping sum:
D(n) = [sum_{i=4}^n (D(i) - D(i-1))] + D(3)
D(n) = [sum_{i=4}^n (-1)^i k (k-1)^(i-1)] - k (k-1) (k-2).
Break it down as two geometric sums which then cancel nicely.
D(n) = [sum_{i=4}^n (-1)^i ((k-1) + 1) (k-1)^(i-1)] - k (k-1) (k-2)
= sum_{i=4}^n (1-k)^i - sum_{i=4}^n (1-k)^(i-1) - k (k-1) (k-2)
= (1-k)^n - (1-k)^3 - k (k-1) (k-2)
= (1-k)^n - (1 - 3k + 3k^2 - k^3) - (2k - 3k^2 + k^3)
= (1-k)^n - (1-k)
C(n) = (-1)^n (1-k)^n - (-1)^n (1-k)
= (k-1)^n + (-1)^n (k-1).
Note that after removing all parallel edges, we can have at most n edges. This means that in any one connected component we can only see one cycle (and simple at that), which makes the combinatorics rather straightforward. (Cycles are only dependent on how many edges each node can spawn, which is capped at 1.)
Second example:
k = 3
  0 <-- 3
 / ^
v   \
1 --> 2
Since cycles are self-contained, any connection to one removes the possibility of another. In the example above, we cannot make a second cycle involving node 3 by adding more nodes, and the same issue would extend to any subsequently connected nodes.
It should be enough, therefore, to perform a search, separating out connected components and marking their node count and whether they contain a cycle. Given a connected component where c of the nodes are part of a cycle and m nodes are not, we have the following formula (David Eisenstat helped me correct my combinatorics for the count of colourings of a cycle):
if the component has a cycle:
[(k - 1)^c + (-1)^c * (k - 1)] *
(k - 1)^(m)
otherwise:
k * (k - 1)^(m - 1)
As David Eisenstat noted, multiply all these results for the final tally.
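Putting the two answers together, here is a rough C++ sketch of a full solution along these lines; the way the cycle in each component is detected (walking the functional graph i -> C[i]) and the overall bookkeeping are my own assumptions, not part of the original answers:

#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1000000007LL;

long long mpow(long long b, long long e) {          // modular exponentiation
    long long r = 1;
    b %= MOD;
    while (e > 0) {
        if (e & 1) r = r * b % MOD;
        b = b * b % MOD;
        e >>= 1;
    }
    return r;
}

int main() {
    int T;
    cin >> T;
    while (T--) {
        int n;
        long long k;
        cin >> n >> k;
        vector<int> c(n);
        for (int i = 0; i < n; ++i) cin >> c[i];

        // Undirected connected components (DSU over the edges i -- c[i]).
        vector<int> par(n);
        iota(par.begin(), par.end(), 0);
        function<int(int)> find = [&](int x) { return par[x] == x ? x : par[x] = find(par[x]); };
        for (int i = 0; i < n; ++i) par[find(i)] = find(c[i]);

        // Mark the nodes that lie on a cycle of the functional graph i -> c[i].
        vector<int> state(n, 0);                    // 0 = unseen, 1 = on current walk, 2 = done
        vector<char> onCycle(n, 0);
        for (int s = 0; s < n; ++s) {
            if (state[s]) continue;
            vector<int> path;
            int u = s;
            while (state[u] == 0) { state[u] = 1; path.push_back(u); u = c[u]; }
            if (state[u] == 1)                      // found a new cycle: mark it, walking back to u
                for (int idx = (int)path.size() - 1; idx >= 0; --idx) {
                    onCycle[path[idx]] = 1;
                    if (path[idx] == u) break;
                }
            for (int v : path) state[v] = 2;
        }

        // Per component: total node count and number of cycle nodes.
        map<int, pair<int, int>> comp;              // DSU root -> (size, cycle length)
        for (int i = 0; i < n; ++i) {
            auto &p = comp[find(i)];
            p.first++;
            if (onCycle[i]) p.second++;
        }

        long long ans = 1;
        for (auto &e : comp) {
            long long sz = e.second.first, cyc = e.second.second;
            if (cyc <= 2) {
                // a 2-cycle collapses to a single undirected edge, so the component is a tree
                ans = ans * (k % MOD) % MOD * mpow(k - 1, sz - 1) % MOD;
            } else {
                // (k-1)^cyc + (-1)^cyc (k-1) colourings of the cycle, (k-1) per hanging node
                long long cycleWays = (mpow(k - 1, cyc) + (cyc % 2 ? MOD - (k - 1) : (k - 1))) % MOD;
                ans = ans * cycleWays % MOD * mpow(k - 1, sz - cyc) % MOD;
            }
        }
        cout << ans << "\n";
    }
    return 0;
}

On the three sample test cases this reproduces 6, 12 and 0.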
Problem: Find the sum of lcm(i, N) / i over i = 1..N for a given N.
Range of N: 1 <= N <= 10^7.
The main challenge is handling the number of queries (Q), which can be large.
Methods I have used so far :
Brute Force
while(Q--)
{
    int N;
    cin >> N;
    for(int i = 1; i <= N; i++)
        ans += lcm(i, N) / i;
}
Complexity: O(N) per query, i.e. O(Q*N) overall.
Preprocessing in O(N) and handling queries in O(sqrt(N))
First I build a table which holds the value of the Euler totient function for every N.
This can be done in O(N).
void sieve()
{
    // phi table holds the Euler totient function values
    // lp holds the lowest prime factor for a number
    // pr is a vector which contains the prime numbers
    phi[1] = 1;
    for(int i = 2; i <= MAX; i++)
    {
        if(lp[i] == 0)
        {
            lp[i] = i;
            phi[i] = i - 1;
            pr.push_back(i);
        }
        else
        {
            if(lp[i] == lp[i / lp[i]])
                phi[i] = phi[i / lp[i]] * lp[i];
            else
                phi[i] = phi[i / lp[i]] * (lp[i] - 1);
        }
        for(int j = 0; j < (int)pr.size() && pr[j] <= lp[i] && i * pr[j] <= MAX; j++)
            lp[i * pr[j]] = pr[j];
    }
}
For each query, enumerate the divisors d of N and add d*phi[d] to the result.
for(int i = 1; i * i <= n; i++)
{
    if(n % i == 0)
    {
        // i is a factor
        sum += (n / i) * phi[n / i];
        if(i * i != n)
        {
            // n/i is a factor too
            sum += i * phi[i];
        }
    }
}
This takes O(sqrt(N)) .
Complexity : O(Q*sqrt(N))
Handling queries in O(1)
To the sieve method I described above I add a part which calculates the answer we need in O(N log N):
for(int i = 1; i <= MAX; ++i)
{
    // MAX is 10^7
    for(int j = i; j <= MAX; j += i)
    {
        ans[j] += i * phi[i];
    }
}
This unfortunately times out for the given constraints and the time limit (1 second).
I think this requires some clever idea involving the prime factorization of N.
I can prime factorize a number in O(log N) using the lp (lowest prime) table built above, but I can't figure out how to arrive at the answer using the factorization.
You can try the following approach:
lcm(i, n) / i = (i * n / gcd(i, n)) / i = n / gcd(i, n)
So we need to find the sum of the numbers n / gcd(i, n).
Let n = p1^i1 * p2^i2 * ... * pk^ik where p1, p2, ..., pk are prime.
The number of items n / gcd(i, n) where gcd(i, n) == 1 is phi[n] = n*(p1-1)*(p2-1)*...*(pk-1)/(p1*p2*...*pk), so add n*phi[n] to the sum.
The number of items n / gcd(i, n) where gcd(i, n) == p1 is phi[n/p1] = (n/p1)*(p1-1)*(p2-1)*...*(pk-1)/(p1*p2*...*pk), so add n/p1*phi[n/p1] to the sum.
The number of items n / gcd(i, n) where gcd(i, n) == p1*p2 is phi[n/(p1*p2)] = (n/(p1*p2))*(p1-1)*(p2-1)*...*(pk-1)/(p1*p2*...*pk), so add n/(p1*p2)*phi[n/(p1*p2)] to the sum.
Now the answer is the sum of
n/(p1^j1*p2^j2*...*pk^jk) * phi[n/(p1^j1*p2^j2*...*pk^jk)]
over all
j1=0,...,i1
j2=0,...,i2
....
jk=0,...,ik
The total number of items in this sum is (i1+1)*(i2+1)*...*(ik+1), which is significantly less than O(n).
To calculate this sum you can use a recursive function whose arguments are the original number, the initial factorization, the current factorization, and the set of visited states:
initial = {p1:i1, p2:i2, ... , pk:ik}
current = {p1:i1, p2:i2, ... , pk:ik}
visited = {}

int calc(n, initial, current, visited):
    if current in visited:
        return 0
    add current to visited
    int sum = 0
    for pj in keys of current:
        if current[pj] == 0:
            continue
        current[pj]--
        sum += calc(n, initial, current, visited)
        current[pj]++
    mult1 = n
    for pj in keys of current:
        mult1 /= pj^current[pj]
    mult2 = mult1
    for pj in keys of current:
        if initial[pj] == current[pj]:
            continue
        mult2 = mult2 * (pj - 1) / pj
    sum += mult1 * mult2
    return sum
It's possible to quickly determine the sum if you know the prime factorization of the number N. Working from the same approach (totient function times N divided by a factor) as the existing answer, but applying some algebra to simplify terms, factoring the expression into sums of prime powers, and substituting the formula for a geometric series, we arrive at a much simpler solution.
Given the prime factorization of N in primes ps to powers qs, we can compute the result of the original equation for N via:
result = 1
for p, q in prime_factors
    result *= p * (p-1) * (p**(2*q) - 1) / (p**2 - 1) + 1
Note that ** denotes exponentiation in the above pseudo-code.
If one sieves for primes up to MAX, storing at least one prime divisor for each composite discovered (as mentioned in the original problem) as precomputation, it's possible to factor the subsequent N values in log(N) time by referencing the factor table. If one also pre-computes a prime power table, the above algorithm runs in log(N) time per query, for an overall complexity of O(MAX*log(MAX)) pre-computation time, O(Q*log(MAX)) query time, and O(MAX) space.
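A sketch of that approach in C++ (the MAX bound of 10^7, the plain scanf/printf I/O, and the absence of a modulus are assumptions carried over from the question; the sum fits in 64 bits for N up to 10^7):

#include <bits/stdc++.h>
using namespace std;

const int MAXV = 10000000;          // assumed upper bound on N
int lp[MAXV + 1];                   // lowest prime factor of each number
vector<int> pr;                     // list of primes

void buildSieve() {                 // linear sieve, O(MAXV)
    for (int i = 2; i <= MAXV; ++i) {
        if (lp[i] == 0) { lp[i] = i; pr.push_back(i); }
        for (int j = 0; j < (int)pr.size() && pr[j] <= lp[i] && (long long)i * pr[j] <= MAXV; ++j)
            lp[i * pr[j]] = pr[j];
    }
}

// sum over i = 1..n of lcm(i, n) / i  =  sum over d | n of d * phi(d)
long long solve(int n) {
    long long result = 1;
    while (n > 1) {
        long long p = lp[n];
        int q = 0;
        while (n % p == 0) { n /= p; ++q; }
        long long g = 1, psq = p * p;           // g = 1 + p^2 + ... + p^(2(q-1)) = (p^(2q) - 1) / (p^2 - 1)
        for (int t = 1; t < q; ++t) g = g * psq + 1;
        result *= p * (p - 1) * g + 1;          // per-prime factor from the formula above
    }
    return result;
}

int main() {
    buildSieve();
    int Q;
    scanf("%d", &Q);
    while (Q--) {
        int n;
        scanf("%d", &n);
        printf("%lld\n", solve(n));
    }
    return 0;
}

As a quick check, N = 6 gives 1 + 2*1 + 3*2 + 6*2 = 21, and the formula produces (2*1*1 + 1) * (3*2*1 + 1) = 3 * 7 = 21.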
I have this c++ like pseudo code here:
for (i = 1; i <= (n - 2); i++)
    for (j = i + 1; j <= (n - 1); j++)
        for (k = j + 1; k <= n; k++)
            Print "Hello World";
I am fairly certain the time complexity of this particular block of code is O(n^3), because it is a triple nested for loop and each loop runs nearly n times, so I generalized it as (n-2) * (n-1) * n.
But I have been trying to derive the actual time complexity function. This is how far I got; I could not proceed any further:
summation from i = 1 to n-2 of ( summation from j = i+1 to n-1 of ( summation from k = j+1 to n of 1 ) ).
I understand that the innermost loop performs n - j steps, the middle loop runs over n - 1 - i values of j, and the outer loop over n - 2 values of i. I just need some pointers on how to simplify the summations into a closed-form time complexity function.
Thank you!
If interested, the loops iterate through every combination of n things taken 3 at a time, starting with (1,2,3), (1,2,4), ... , and ending with (n-2,n-1,n), which is n! / (( 3! )( (n-3)!) ) = (n)(n-1)(n-2)/6 = (n^3 - 3n^2 + 2n) / 6 , which leads to O(n^3).
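If you want to carry the summations from the question out directly instead of counting combinations, one way to collapse them from the inside out (using 1 + 2 + ... + m = m(m+1)/2) is:
sum_{k=j+1}^{n} 1 = n - j
sum_{j=i+1}^{n-1} (n - j) = 1 + 2 + ... + (n - i - 1) = (n - i - 1)(n - i)/2
sum_{i=1}^{n-2} (n - i - 1)(n - i)/2 = sum_{t=2}^{n-1} t(t - 1)/2   (substituting t = n - i)
= n(n - 1)(n - 2)/6
The last step is the hockey-stick identity (summing C(t, 2) for t = 2..n-1 gives C(n, 3)), which matches the combination count above.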
Rather than running the loops from 1 up to and including a bound, note that your code is equivalent to:
for (i = 0; i < (n - 2); i++)
    for (j = i; j < (n - 1); j++)
        for (k = j; k < n; k++)
            Print "Hello World";
So the inner loop runs n - j times, the middle loop multiplies that by its n - 1 - i iterations, and the outer loop by its n - 2 iterations, giving on the order of (n - j) * (n - 1 - i) * (n - 2) work. Each factor is at most n: because i runs from 0 to n - 1, you can replace it with O(n) (summing an index over that range gives 0 + 1 + ... + (n - 1) = 0.5 * n * (n - 1) = O(n^2), the same bound as treating each term as O(n)). It is the same with j. So you get O(n) * O(n) * O(n) = O(n^3).
For details on why you can replace i with O(n), see "Nested loops" at this link.
std::sort performs approximately N*log2(N) comparisons of elements, where N is the distance between the iterators (source - http://www.cplusplus.com/), so its complexity is N*log2(N).
Please, help me to calculate complexity for the next code:
void func(std::vector<float> & Storage)
{
    for (int i = 0; i < Storage.size() - 1; ++i)
    {
        std::sort(Storage.begin() + i, Storage.end());
        Storage[i + 1] += Storage[i];
    }
}
Is the complexity N^2*log2(N), or 2*log2(2) + 3*log2(3) + ... + N*log2(N)?
Thank you.
The proper way to compute the complexity is to evaluate the total cost of the repeated O(K Log K) subproblems of linearly increasing sizes K = 1 ... N. This can be done either by computing the sum, or by just computing the integral
Integrate[K Log[K], {K, 0, N}]
with e.g. Mathematica, and you get
1/4 N^2 (-1 + 2 Log[N])
which is of O(N^2 Log N).
Even though it holds for polynomial and logarithmic functions, it is not true in general that the total cost of K = 1 ... N subproblems of complexity f(K) equals N f(N). E.g. the sum of K = 1 ... N subproblems of complexity Exp[K] is on the order of Exp[N], not N Exp[N].
I would agree with N^2*log2(N), as the sort algorithm is run N times. In Big-O terms, where c is a constant:
N * c*N*log2(N) => O(N^2*log2(N))
It will be asymptotically O(N^2 * log2(N)); we need the sum of k*log2(k) for k from 1 to N.
You are summing up logarithmic functions:
complexity <- 0
for i = 1..N
    complexity += i * Log(i)
Resulting in the summation:
Log(1) + 2 Log(2) + ... + N Log(N)
from http://en.wikipedia.org/wiki/Logarithm:
the logarithm of a product is the sum of the logarithms of the factors:
thus:
the summation becomes:
Log(1) + Log(2^2) + .. + Log(N^N)
further simplifying:
Log(1 * 2^2 * 3^3 * ... * N^N), which is at most Log(N^(1 + 2 + ... + N)) = (N*(N+1)/2) * Log(N), i.e. O(N^2 * Log(N)).
Say we have 3 numbers N, x and y which are always >=1.
N will be greater than x and y and x will be greater than y.
Now we need to find the sum of all numbers between 1 and N that are divisible by either x or y.
I came up with this:
sum = 0;
for (i = 1; i <= N; i++)
{
    if (i % x == 0 || i % y == 0)
        sum += i;
}
Is there a better way of finding the sum that avoids the for loop?
I've been pounding my head for many days now but have not got anything better.
If the value of N has an upper limit we can use a lookup method to speed up the process.
Thanks everyone.
I wanted a C/C++ based solution. Is there a built-in function to do this? Or do I have to code the algorithm?
Yes. You can avoid the for loop altogether and find the sum in constant time.
According to the Inclusion–exclusion principle, summing up the multiples of x and the multiples of y, and subtracting the common multiples that got added twice, gives us the required sum.
Required Sum = sum of ( multiples of x that are <= N ) +
sum of ( multiples of y that are <= N ) -
sum of ( multiples of (x*y) that are <= N )
Example:
N = 15
x = 3
y = 4
Required sum = ( 3 + 6 + 9 + 12 + 15) + // multiples of 3
( 4 + 8 + 12 ) - // multiples of 4
( 12 ) // multiples of 12
As seen above we had to subtract 12 as it got added twice because it is a common multiple.
How is the entire algorithm O(1)?
Let sum(x, N) be sum of multiples of x which are less than or equal to N.
sum(x,N) = x + 2x + ... + floor(N/x) * x
= x * ( 1 + 2 + ... + floor(N/x) )
= x * ( 1 + 2 + ... + k) // Where k = floor(N/x)
= x * k * (k+1) / 2 // Sum of first k natural num = k*(k+1)/2
Now k = floor(N/x) can be computed in constant time.
Once k is known sum(x,N) can be computed in constant time.
So the required sum can also be computed in constant time.
EDIT:
The above discussion holds true only when x and y are coprime. If not, we need to use LCM(x,y) in place of x*y. There are many ways to find the LCM, one of which is to divide the product by the GCD. The GCD cannot be computed in constant time, but with the Euclidean algorithm its cost is O(log min(x, y)), far below linear time.
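A small C++ sketch of the whole computation (the function names are just for illustration; std::gcd requires C++17):

#include <numeric>   // std::gcd (C++17)

// Sum of all multiples of k that are <= N: k * (1 + 2 + ... + floor(N/k))
long long sumOfMultiples(long long k, long long N) {
    long long cnt = N / k;
    return k * cnt * (cnt + 1) / 2;
}

// Sum of the numbers in [1, N] divisible by x or y, via inclusion-exclusion
long long sumDivisibleByEither(long long N, long long x, long long y) {
    long long l = x / std::gcd(x, y) * y;   // lcm(x, y), not x*y, so non-coprime x and y work too
    return sumOfMultiples(x, N) + sumOfMultiples(y, N) - sumOfMultiples(l, N);
}

For the example above (N = 15, x = 3, y = 4) this returns 45 + 24 - 12 = 57.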
If a number is divisible by X, it has to be a multiple of x.
If a number is divisible by Y, it has to be a multiple of y.
I believe, if you do a for loop for all multiples of x and y, and avoid any duplicates, you should get the same answer.
Off the top of my head, something of the type:
sum = 0;
for (i = x; i <= n; i += x)
    sum += i;
for (i = y; i <= n; i += y)
    if (i % x != 0)   // skip multiples of x that were already counted
        sum += i;