Given an N*N grid, we need to find a good path of maximum length, where a good path is defined as follows:
A good path always starts from a cell marked 0.
We are only allowed to move left, right, up or down.
If the value of the ith cell is A, then the value of the next cell in the path must be A+1.
Given these conditions, I need to find the length of the maximum path that can be made, and also count how many paths have that maximum length.
Example: let N=3 and the 3*3 matrix be as follows:
0 3 2
3 0 1
2 1 0
Then the maximum good path length here is 3, and the count of such good paths is 4:
(1,1) -> (1,2) -> (0,2)
(1,1) -> (2,1) -> (2,0)
(2,2) -> (1,2) -> (0,2)
(2,2) -> (2,1) -> (2,0)
(cells given as 0-indexed (row, column); each path visits the values 0, 1, 2)
This problem is a variation of the Longest Path Problem; however, your restrictions make it much easier, since the graph is actually a Directed Acyclic Graph (DAG), and thus the problem is solvable efficiently.
Define the directed graph G=(V,E) as follows:
V = { all cells in the matrix } (sanity check: |V| = N^2)
E = { (u,v) | u is adjacent to v AND value(u) + 1 = value(v) }
Note that the resulting graph is a DAG: it cannot contain a cycle, because a cycle would require some edge e=(u,v) with value(u) > value(v), which contradicts the edge definition.
Now you only need to find the longest path in a DAG from any starting point. This is done by topological sort on the graph, followed by Dynamic Programming:
init:
    for every node v:
        D(v) = 0 if value(v) = 0
        D(v) = -infinity otherwise
step:
    for each node v from first to last (according to the topological sort):
        D(v) = max( D(v), max{ D(u) + 1 | (u,v) is an edge } )
When you are done, find the node v with the maximal value D(v); this is the length of the longest "good path" (counted in edges; the number of cells on the path is D(v) + 1).
Finding the path itself is done by retracing your steps from that maximal D(v) back to an initial node with value 0.
The complexity of this approach is O(V+E) = O(N^2).
Since you are looking for the number of longest paths, you can modify this solution a bit to count the number of paths reaching each node, as follows:
Topological sort the nodes, let the sorted array be arr (1)
For each node v from start to end of arr:
    if value(v) = 0:
        set D(v) = 1
    else:
        sum = 0
        for each u such that (u,v) is an edge: (2)
            sum = sum + D(u)
        D(v) = sum
The above finds, for each node v, the number of "good paths" D(v) that reach it. All you have to do now is find the maximal value x for which there is some node v with value(v) = x and D(v) > 0, and sum the number of paths reaching any node with that value:
max = 0
numPaths = 0
for each node v:
    if value(v) == max:
        numPaths = numPaths + D(v)
    else if value(v) > max AND D(v) > 0:
        numPaths = D(v)
        max = value(v)
return numPaths
Notes:
(1) - a "regular" sort works here to, but it will take O(n^2logn) time, and topological sort takes O(n^2) time
(2) Reminder, (u,v) is an edge if: (1) u and v are adjacent (2) value(u) + 1 = value(v)
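For concreteness, here is a minimal C++ sketch of this idea (my own function and variable names). It exploits the fact that sorting cells by value is a valid topological order for this DAG, folds the length DP and the path-count DP into one pass, and counts path length in cells to match the question's example:

#include <vector>
#include <algorithm>
#include <utility>
using namespace std;

// Returns {length of the longest good path in cells, number of such paths}.
pair<int, long long> longestGoodPath(const vector<vector<int>>& m) {
    int n = m.size();
    vector<pair<int, pair<int,int>>> order;                    // (value, (row, col))
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            order.push_back({m[i][j], {i, j}});
    sort(order.begin(), order.end());                          // a topological order
    vector<vector<int>> len(n, vector<int>(n, -1));            // cells on best path to cell
    vector<vector<long long>> cnt(n, vector<long long>(n, 0)); // number of best paths to cell
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    int best = 0; long long paths = 0;
    for (auto& e : order) {
        int i = e.second.first, j = e.second.second;
        if (m[i][j] == 0) { len[i][j] = 1; cnt[i][j] = 1; }    // a source of the DAG
        for (int d = 0; d < 4; d++) {                          // incoming edges u -> v
            int pi = i + dr[d], pj = j + dc[d];
            if (pi < 0 || pi >= n || pj < 0 || pj >= n) continue;
            if (m[pi][pj] + 1 != m[i][j] || len[pi][pj] < 0) continue;
            if (len[pi][pj] + 1 > len[i][j]) { len[i][j] = len[pi][pj] + 1; cnt[i][j] = 0; }
            if (len[pi][pj] + 1 == len[i][j]) cnt[i][j] += cnt[pi][pj];
        }
        if (len[i][j] > best) { best = len[i][j]; paths = 0; }
        if (len[i][j] == best && best > 0) paths += cnt[i][j];
    }
    return {best, paths};
}

For the 3*3 example matrix above, this should return length 3 and count 4.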
You can do this with a simple Breadth-First Search.
First find all cells marked 0. (This is O(N^2).) On each such cell put a walker. Each walker carries a number 'p' initialized to 1.
Now iterate:
All walkers stand on cells with the same number k. Each walker looks for neighboring cells (left, right, up or down) marked with k+1.
If no walker sees such a cell, the search is over. The length of the longest path is k, and the number of such paths is the sum of the p's of all the walkers.
If some walkers do see such a cell, kill any walkers that don't.
Each walker moves into a good neighboring cell. If a walker sees more than one good cell, it divides into as many walkers as there are good cells, and one goes into each. (Each "child" has the same p value its "parent" had.) If two or more walkers meet in the same cell (i.e. if more than one path led to that cell) then they combine into a single walker, whose 'p' value is the sum of their 'p' values.
This algorithm is O(N^2), since no cell can be visited more than once, and the number of walkers cannot exceed the number of cells.
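Here is a rough C++ sketch of that simulation (my own names; the frontier maps each occupied cell, encoded as row*N+col, to the combined p of the walkers standing on it, so merging walkers is just an addition):

#include <cstdio>
#include <map>
#include <vector>
using namespace std;

void walk(const vector<vector<int>>& m) {
    int n = m.size();
    map<int, long long> frontier;                      // cell -> combined p
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (m[i][j] == 0) frontier[i * n + j] = 1; // one walker per 0 cell
    int k = 0;
    int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!frontier.empty()) {
        map<int, long long> next;
        for (auto& w : frontier) {
            int i = w.first / n, j = w.first % n;
            for (int d = 0; d < 4; d++) {              // walkers move and divide
                int ni = i + dr[d], nj = j + dc[d];
                if (ni < 0 || ni >= n || nj < 0 || nj >= n) continue;
                if (m[ni][nj] == k + 1) next[ni * n + nj] += w.second; // meet and merge
            }
        }
        if (next.empty()) break;                       // no walker sees a k+1 cell
        frontier = next;
        k++;
    }
    long long paths = 0;
    for (auto& w : frontier) paths += w.second;
    printf("longest path ends at value %d (%d cells), count %lld\n", k, k + 1, paths);
}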
I did it using ActionScript, hope it's readable. I think it is working correctly but I may have missed something.
const N:int = 9; // field size
const MIN_VALUE:int = 0; // start value
var field:Array = [];
// create field - not relevant to the task
var probabilities:Array = [0,1,2,3,4,5];
for (var i:int = 0; i < N * N; i++) field.push(probabilities[int(Math.random() * probabilities.length)]);
print_field();
// initial chain fill. We will find any chains of adjacent 0-1 elements.
var chain_list:Array = [];
for (var offset:int = 0; offset < N * N - 1; offset++) {
if (offset < N * N - N) { // y coordinate is not the lowest
var chain:Array = find_chain(offset, offset + N, MIN_VALUE);
if (chain) chain_list.push(chain);
}
if ((offset % N) < N - 1) { // x coordinate is not the rightmost
chain = find_chain(offset, offset + 1, MIN_VALUE);
if (chain) chain_list.push(chain);
}
}
var merged_chain_list:Array = chain_list;
var current_value:int = MIN_VALUE + 1;
// for each found chain, scan its higher end for more attached chains
// and merge them into new chain if found
while(chain_list.length) {
chain_list = [];
for (i = 0; i < merged_chain_list.length; i++) {
chain = merged_chain_list[i];
offset = chain[chain.length - 1];
if (offset < N * N - N) {
var tmp:Array = find_chain(offset, offset + N, current_value);
if (tmp) chain_list.push(merge_chains(chain, tmp));
}
if (offset >= N) { // y coordinate is not the topmost
tmp = find_chain(offset, offset - N, current_value);
if (tmp) chain_list.push(merge_chains(chain, tmp));
}
if ((offset % N) < N - 1) {
tmp = find_chain(offset, offset + 1, current_value);
if (tmp) chain_list.push(merge_chains(chain, tmp));
}
if (offset % N) {
tmp = find_chain(offset, offset - 1, current_value);
if (tmp) chain_list.push(merge_chains(chain, tmp));
}
}
//save the last merged result if any and try the next value
if (chain_list.length) {
merged_chain_list = chain_list;
current_value++;
}
}
// final merged list is a list of chains of a same maximum length
print_chains(merged_chain_list);
function find_chain(offset1, offset2, current_value):Array {
// the result is always ordered from the min value to the max value
var v1:int = field[offset1];
var v2:int = field[offset2];
if (v1 == current_value && v2 == current_value + 1) return [offset1, offset2];
if (v2 == current_value && v1 == current_value + 1) return [offset2, offset1];
return null;
}
function merge_chains(chain1:Array, chain2:Array):Array {
var tmp:Array = [];
for (var i:int = 0; i < chain1.length; i++) tmp.push(chain1[i]);
tmp.push(chain2[1]);
return tmp;
}
function print_field():void {
for (var pos_y:int = 0; pos_y < N; pos_y++) {
var offset:int = pos_y * N;
var s:String = "";
for (var pos_x:int = 0; pos_x < N; pos_x++) {
var v:int = field[offset++];
if (v == 0) s += "[0]"; else s += " " + v + " ";
}
trace(s);
}
}
function print_chains(chain_list):void {
var cl:int = chain_list.length;
trace("\nchains found: " + cl);
if (cl) trace("chain length: " + chain_list[0].length);
for (var i:int = 0; i < cl; i++) {
var chain:Array = chain_list[i];
var s:String = "";
for (var j:int = 0; j < chain.length; j++) s += chain[j] + ":" + field[chain[j]] + " ";
trace(s);
}
}
Sample output:
1 2 1 3 2 2 3 2 4
4 3 1 2 2 2 [0][0] 1
[0][0] 1 2 4 [0] 3 3 1
[0][0] 5 4 1 1 [0][0] 1
2 2 3 4 3 2 [0] 1 5
4 [0] 3 [0] 3 1 4 3 1
1 2 2 3 5 3 3 3 2
3 4 2 1 2 4 4 4 5
4 2 1 2 2 3 4 5 [0]
chains found: 2
chain length: 5
23:0 32:1 41:2 40:3 39:4
33:0 32:1 41:2 40:3 39:4
I implemented it in my own Lisp dialect, so the source code is not going to help you that much :-) ...
EDIT: Added a Python version too.
Anyway, the idea is:
write a function paths(i, j) --> (maxlen, number) that returns the maximal length of the paths starting from (i, j) and how many of them there are
this function is recursive: looking at the neighbors of (i, j) with value M[i][j]+1, it calls paths(ni, nj) to get the result for each valid neighbor
if the maximal length for a neighbor is bigger than the current maximal length, set a new current maximal length and reset the counter
if the maximal length is the same as the current one, add that neighbor's counter to the total
if the maximal length is smaller, just ignore that neighbor's result
cache the result of the computation for the cell (this is very important!). In my version the code is split in two mutually recursive functions: paths that checks the cache first and calls compute-paths otherwise; compute-paths calls paths when processing neighbors. The caching of a recursive call is roughly equivalent to an explicit Dynamic Programming approach, but sometimes easier to implement.
To compute the final result you basically do the same computation but adding up the result for all 0 cells instead of considering neighbors.
Note that the number of different paths can become huge, and that's why enumerating all of them is not a viable option and caching/DP is a must: for example for a N=20 matrix with values M[i][j] = i+j there are 35,345,263,800 maximal paths of length 38.
This algorithm is O(N^2) in time (each cell is visited at most once) and requires O(N^2) space for the cache and for the recursion. Of course you cannot expect to get anything better than this given that the input is composed of N^2 numbers itself and you need at least to read them to compute an answer.
(defun good-paths (matrix)
(let** ((N (length matrix))
(cache (make-array (list N N)))
(#'compute-paths (i j)
(let ((res (list 0 1))
(count (1+ (aref matrix i j))))
(dolist ((ii jj) (list (list (1+ i) j) (list (1- i) j)
(list i (1+ j)) (list i (1- j))))
(when (and (< -1 ii N) (< -1 jj N)
(= (aref matrix ii jj) count))
(let (((maxlen num) (paths ii jj)))
(incf maxlen)
(cond
((< (first res) maxlen)
(setf res (list maxlen num)))
((= (first res) maxlen)
(incf (second res) num))))))
res))
(#'paths (i j)
(first (or (aref cache i j)
(setf (aref cache i j)
(list (compute-paths i j))))))
(res (list 0 0)))
(dotimes (i N)
(dotimes (j N)
(when (= (aref matrix i j) 0)
(let (((maxlen num) (paths i j)))
(cond
((< (first res) maxlen)
(setf res (list maxlen num)))
((= (first res) maxlen)
(incf (second res) num)))))))
res))
Edit
The following is a transliteration of the above into Python, which should be much easier to understand if you have never seen Lisp before...
def good_paths(matrix):
N = len(matrix)
cache = [[None]*N for i in xrange(N)] # an NxN matrix of None
def compute_paths(i, j):
maxlen, num = 0, 1
count = 1 + matrix[i][j]
for (ii, jj) in ((i+1, j), (i-1, j), (i, j-1), (i, j+1)):
if 0 <= ii < N and 0 <= jj < N and matrix[ii][jj] == count:
nh_maxlen, nh_num = paths(ii, jj)
nh_maxlen += 1
if maxlen < nh_maxlen:
maxlen = nh_maxlen
num = nh_num
elif maxlen == nh_maxlen:
num += nh_num
return maxlen, num
def paths(i, j):
res = cache[i][j]
if res is None:
res = cache[i][j] = compute_paths(i, j)
return res
maxlen, num = 0, 0
for i in xrange(N):
for j in xrange(N):
if matrix[i][j] == 0:
c_maxlen, c_num = paths(i, j)
if maxlen < c_maxlen:
maxlen = c_maxlen
num = c_num
elif maxlen == c_maxlen:
num += c_num
return maxlen, num
Related
I have a number n and I have to split it into k numbers such that all k numbers are distinct, the sum of the k numbers is equal to n, and k is maximal. For example, if n is 9 then the answer should be 1, 2, 6. If n is 15 then the answer should be 1, 2, 3, 4, 5.
This is what I've tried:
void findNum(int l, int k, vector<int>& s)
{
if (k <= 2 * l) {
s.push_back(k);
return;
}
else if (l == 1) {
s.push_back(l);
findNum(l + 1, k - 1, s);
}
else if(l == 2) {
s.push_back(l);
findNum(l + 2, k - 2, s);
}
else{
s.push_back(l);
findNum(l + 1, k - l, s);
}
}
Initially k = n and l = 1. The resulting numbers are stored in s. This solution does return the number n as a sum of distinct numbers, but it is not the optimal solution (k is not maximal). For example, the output for n = 15 is 1, 2, 4, 8. What changes should be made to get the correct result?
A greedy algorithm works for this problem. Start summing up 1 + 2 + ... until the total exceeds n, i.e. find the smallest m with sum(1...m) > n. Then merge the last two candidates m-1 and m into one term and subtract the excess: the answer is 1, 2, ..., m-2 together with (m-1 + m - (sum(1...m) - n)).
eg.
18
1+2+3+4+5 < 18
+6 = 21 > 18
So, answer: 1+2+3+4+(5+6-(21-18))
28
1+2+3+4+5+6+7 = 28
So, answer: 1+2+3+4+5+6+7
Pseudocode (constant time, complexity O(1)):
Find the smallest m such that m * (m+1) > 2 * n
Number of terms = m-1
Terms: 1,2,3...m-2,(m-1 + m - (sum(1...m) - n))
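A small C++ sketch of this greedy (the function name is my own; it assumes n >= 1):

#include <cstdio>
#include <vector>
using namespace std;

// Take 1, 2, 3, ... while the running sum stays within n, then fold
// the remainder into the last term so the total equals n exactly.
vector<int> maxDistinctParts(int n) {
    vector<int> parts;
    int sum = 0;
    for (int v = 1; sum + v <= n; v++) {
        parts.push_back(v);
        sum += v;
    }
    parts.back() += n - sum;   // remainder <= parts.back(), so terms stay distinct
    return parts;
}

int main() {
    for (int x : maxDistinctParts(18)) printf("%d ", x);  // 1 2 3 4 8
    printf("\n");
    for (int x : maxDistinctParts(15)) printf("%d ", x);  // 1 2 3 4 5
    printf("\n");
}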
sum can be partitioned into k distinct terms from {1, ..., m} if and only if min(k) <= sum <= max(k,m), with
min(k) = 1 + 2 + .. + k = (k*(k+1))/2
max(k,m) = m + (m-1) + .. + (m-k+1) = k*m - (k*(k-1))/2
So, you can use the following pseudo-code:
fn solve(n, k, sum) -> set or error
    s = new_set()
    for m from n down to 1:
        # will the problem still be solvable if we add m to s?
        if min(k-1) <= sum-m <= max(k-1, m-1) then
            s.add(m), sum -= m, k -= 1
    if sum = 0 and k = 0 then s else error()
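A direct C++ translation of that pseudo-code might look like this (a sketch; an empty vector plays the role of error()):

#include <cstdio>
#include <vector>
using namespace std;

long long minSum(long long k) { return k * (k + 1) / 2; }   // 1 + 2 + ... + k
long long maxSum(long long k, long long m) {                // m + (m-1) + ... + (m-k+1)
    return k * m - k * (k - 1) / 2;
}

// Pick k distinct numbers from {1..n} summing to sum, greedily from the top.
vector<int> solve(int n, int k, long long sum) {
    vector<int> s;
    for (int m = n; m >= 1; m--) {
        // will the problem still be solvable if we add m to s?
        if (minSum(k - 1) <= sum - m && sum - m <= maxSum(k - 1, m - 1)) {
            s.push_back(m);
            sum -= m;
            k -= 1;
        }
    }
    if (sum == 0 && k == 0) return s;
    return {};                        // the error case from the pseudo-code
}

int main() {
    for (int x : solve(10, 3, 15)) printf("%d ", x);   // prints: 10 4 1
    printf("\n");
}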
I am trying to implement Dijkstra's algorithm with an adjacency list, and I can't figure out why I'm failing the test cases.
Node * n = list[source].head;
while(n)
{
q.push(n);
v[n->b] = n->w;
n = n->next;
}
while(!q.empty())
{
n = q.front();
i = n->b;
o = list[i].head;
q.pop();
while(o)
{
if(!v[o->b])
{
q.push(o);
v[o->b] = v[i] + o->w;
}
else if(v[o->b] > v[i] + o->w)
{
v[o->b] = v[i] + o->w;
}
o = o->next;
}
}
i = 0;
while(i < vertices)
{
if(i != node)
printf("%d ", v[i] ? v[i] : -1);
i++;
}
cout<<"\n";
I am passing trivial test cases.
Example input (x y w):
1 2 3
1 3 4
1 4 5
3 5 101
Source is 1.
Output:
3 4 5 5
Example 2:
1 2 24
1 4 20
3 1 3
4 3 12
Source is 1.
Output: 24 3 15
However, I am failing the more sophisticated test cases.
It seems you are confusing the two arrays - one recording which vertices are already visited, and one for the optimal distances (i.e. the best distance to each vertex found so far). Let's denote the visited array with v and the optimal distance array with dist.
In this statement:
if(v[o->b] > v[i] + o->w)
You need to be using dist instead of v.
After you pop a node you need to check if it is visited. If it is visited, continue on to the next node. Otherwise mark it as visited and execute the remaining logic.
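Putting both fixes together, here is a minimal sketch of that structure (my own names, independent of your Node layout; adj[u] holds (neighbor, weight) pairs, and unreachable vertices keep dist -1):

#include <queue>
#include <vector>
#include <functional>
#include <utility>
using namespace std;

// Minimal Dijkstra sketch with separate dist[] and visited[] arrays.
vector<long long> dijkstra(const vector<vector<pair<int,int>>>& adj, int src) {
    int n = adj.size();
    vector<long long> dist(n, -1);
    vector<bool> visited(n, false);
    priority_queue<pair<long long,int>,
                   vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;   // min-heap of (dist, vertex)
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        int u = pq.top().second;
        pq.pop();
        if (visited[u]) continue;                      // popped node already done: skip
        visited[u] = true;
        for (auto& e : adj[u]) {
            int to = e.first, w = e.second;
            if (dist[to] == -1 || dist[u] + w < dist[to]) { // relax with dist, not visited
                dist[to] = dist[u] + w;
                pq.push({dist[to], to});
            }
        }
    }
    return dist;
}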
I saw a question on careercup, but I did not get the answer I wanted there. I wrote an answer myself and want your comments on my analysis of the time complexity and on the algorithm and code. Or you could provide a better algorithm in terms of time. Thanks.
You are given d > 0 fair dice, each with n > 0 sides; write a function that returns a histogram of the frequency of the results of the dice rolls.
For example, for 2 dice, each with 3 sides, the results are:
(1, 1) -> 2
(1, 2) -> 3
(1, 3) -> 4
(2, 1) -> 3
(2, 2) -> 4
(2, 3) -> 5
(3, 1) -> 4
(3, 2) -> 5
(3, 3) -> 6
And the function should return:
2: 1
3: 2
4: 3
5: 2
6: 1
(My solution.) The time complexity of a brute-force depth-first search is O(n^d). However, you can use the DP idea to solve this problem. For example, with n=3 you can use the result of d==1 when computing d==2:
d==1
num #
1 1
2 1
3 1
d==2
first roll second roll is 1
num # num #
1 1 2 1
2 1 -> 3 1
3 1 4 1
first roll second roll is 2
num # num #
1 1 3 1
2 1 -> 4 1
3 1 5 1
first roll second roll is 3
num # num #
1 1 4 1
2 1 -> 5 1
3 1 6 1
Therefore, the combined result after the second roll is:
num #
2 1
3 2
4 3
5 2
6 1
The time complexity of this DP algorithm is
SUM_{i=1..d} { n * [n(i-1) - (i-1) + 1] } ~ O(n^2 * d^2)
where the bracketed factor is the number of distinct sums after i-1 rolls (e.g. for d=2, n=3 the resulting sums range over 2~6).
The code is written in C++ as follows:
#include <vector>
#include <utility>
using namespace std;

vector<pair<int,long long>> diceHisto(int numSide, int numDice) {
int n = numSide*numDice;
vector<long long> cur(n+1,0), nxt(n+1,0);
for(int i=1; i<=numSide; i++) cur[i]=1;
for(int i=2; i<=numDice; i++) {
int start = i-1, end = (i-1)*numSide; // range of previous sum of rolls
//cout<<"start="<<start<<" end="<<end<<endl;
for(int j=1; j<=numSide; j++) {
for(int k=start; k<=end; k++)
nxt[k+j] += cur[k];
}
swap(cur,nxt);
for(int j=start; j<=end; j++) nxt[j]=0;
}
vector<pair<int,long long>> result;
for(int i=numDice; i<=numSide*numDice; i++)
result.push_back({i,cur[i]});
return result;
}
You can do it in O(n*d^2). First, note that the generating function for an n-sided die is p(x) = x+x^2+x^3+...+x^n, and that the distribution for d throws has generating function p(x)^d. Representing the polynomials as arrays, you need O(nd) coefficients, and multiplying by p(x) can be done in a single pass in O(nd) time by keeping a rolling sum.
Here's some Python code that implements this. It has one non-obvious optimisation: it throws out a factor of x from each p(x) (or equivalently, it treats the dice as having faces 0,1,2,...,n-1 rather than 1,2,3,...,n), which is why d is added back in when showing the distribution.
def dice(n, d):
r = [1] + [0] * (n-1) * d
nr = [0] * len(r)
for k in xrange(d):
t = 0
for i in xrange(len(r)):
t += r[i]
if i >= n:
t -= r[i-n]
nr[i] = t
r, nr = nr, r
return r
def show_dist(n, d):
for i, k in enumerate(dice(n, d)):
if k: print i + d, k
show_dist(6, 3)
The time and space complexity are easy to see: there are nested loops with d and (n-1)*d iterations, so the time complexity is O(n*d^2); and there are two arrays of size O(nd) and no other allocation, so the space complexity is O(nd).
Just in case, here is a simple example in Python using the OpenTurns platform.
import openturns as ot
d = 2 # number of dice
n = 6 # number of sides per die
# possible values
dice_distribution = ot.UserDefined([[i] for i in range(1, n + 1)])
# distribution of the sum of d independent dice
sum_distribution = sum([dice_distribution] * d)
That's it!
print(sum_distribution)
will show you all the possible values and their corresponding probabilities:
>>> UserDefined(
{x = [2], p = 0.0277778},
{x = [3], p = 0.0555556},
{x = [4], p = 0.0833333},
{x = [5], p = 0.111111},
{x = [6], p = 0.138889},
{x = [7], p = 0.166667},
{x = [8], p = 0.138889},
{x = [9], p = 0.111111},
{x = [10], p = 0.0833333},
{x = [11], p = 0.0555556},
{x = [12], p = 0.0277778}
)
You can also draw the probability distribution function:
sum_distribution.drawPDF()
I am fairly new to C++, and am struggling through a problem that seems to have a solid solution but I just can't seem to find it. I have a contiguous array of ints starting at zero:
int i[6] = { 0, 1, 2, 3, 4, 5 }; // this is actually from an iterator
I would like to partition the array into groups of three. The design is to have two methods, j and k, such that given an i they will return the other two elements from the same group of three. For example:
i j(i) k(i)
0 1 2
1 0 2
2 0 1
3 4 5
4 3 5
5 3 4
The solution seems to involve summing the i with its value mod three and either plus or minus one, but I can't quite seem to work out the logic.
This should work:
int d = i % 3;
int j = i - d + ( d == 0 );
int k = i - d + 2 - ( d == 2 );
or the following statement for k may be more readable:
int k = i - d + ( d == 2 ? 1 : 2 );
This should do it:
int j(int i)
{
int div = i / 3;
if (i%3 != 0)
return 3*div;
else
return 3*div+1;
}
int k(int i)
{
int div = i / 3;
if (i%3 != 2)
return 3*div+2;
else
return 3*div+1;
}
If you want shorter functions:
int j(int i)
{
return i/3*3 + (i%3 ? 0 : 1);
}
int k(int i)
{
return i/3*3 + (i%3-2 ? 2 : 1);
}
Well, first, notice that
j(3+i) == 3 + j(i), j(6+i) == 6 + j(i), ...
k(3+i) == 3 + k(i), k(6+i) == 6 + k(i), ...
In other words, you only need to find a formula for
j(i), i = 0, 1, 2
k(i), i = 0, 1, 2
and then for the rest of the cases simply plug in i mod 3 and add back the group base 3*(i/3).
From there, you'll have trouble finding a simple formula because your "rotation" isn't standard. Instead of
i j(i) k(i)
0 1 2
1 2 0
2 0 1
for which the formula would have been
j(i) = (i + 1) % 3
k(i) = (i + 2) % 3
you have
i j(i) k(i)
0 1 2
1 0 2
2 0 1
for which the only formula I can think of at the moment is
j(i) = (i == 0 ? 1 : 0)
k(i) = (i == 2 ? 1 : 2)
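Combining that with the periodicity above gives complete functions; here is a quick self-contained check (my own sketch) that they reproduce the question's table:

#include <cstdio>

// base = i - i%3 restores the group offset 3*(i/3).
int j(int i) { int r = i % 3, base = i - r; return base + (r == 0 ? 1 : 0); }
int k(int i) { int r = i % 3, base = i - r; return base + (r == 2 ? 1 : 2); }

int main() {
    for (int i = 0; i < 6; i++)
        printf("%d %d %d\n", i, j(i), k(i));   // matches the table in the question
}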
If the values of your array (let's call it arr, not i, in order to avoid confusion with the index i) do not coincide with their respective indices, you have to perform a reverse lookup to figure out the index first. I propose using an std::map<int,size_t> or an std::unordered_map<int,size_t>.
That structure reflects the inverse of arr, and you can extract the index for a particular value with its subscript operator or the at member function. From there, you can operate purely on the indices and use modulo (%) to access the previous and the next element as suggested in the other answers.
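A small sketch of that reverse lookup (the array contents here are made up for illustration):

#include <cstdio>
#include <unordered_map>

int main() {
    int arr[6] = {10, 40, 20, 50, 60, 30};         // values != indices
    std::unordered_map<int, size_t> index_of;      // value -> index (inverse of arr)
    for (size_t i = 0; i < 6; i++) index_of[arr[i]] = i;
    size_t i = index_of.at(50);                    // index of the value 50
    size_t base = i / 3 * 3;                       // first index of its group of three
    // the other two members of the group, via the formulas above
    size_t j = base + (i % 3 ? 0 : 1);
    size_t k = base + (i % 3 != 2 ? 2 : 1);
    printf("%d groups with %d and %d\n", arr[i], arr[j], arr[k]);
}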
I have a big matrix as input, and I have the size of a smaller matrix. I have to compute the element-wise sum of all the sub-matrices of that size that can be formed out of the bigger matrix.
Example.
Input matrix size: 4 × 4
Matrix:
1 2 3 4
5 6 7 8
9 9 0 0
0 0 9 9
Input smaller matrix size: 3 × 3 (not necessarily a square)
Smaller matrices possible:
1 2 3
5 6 7
9 9 0
5 6 7
9 9 0
0 0 9
2 3 4
6 7 8
9 0 0
6 7 8
9 0 0
0 9 9
Their sum, the final output:
14 18 22
29 22 15
18 18 18
I did this:
int** matrix_sum(int **M, int n, int r, int c)
{
int **res = new int*[r];
for(int i=0 ; i<r ; i++) {
res[i] = new int[c];
memset(res[i], 0, sizeof(int)*c);
}
for(int i=0 ; i<=n-r ; i++)
for(int j=0 ; j<=n-c ; j++)
for(int k=i ; k<i+r ; k++)
for(int l=j ; l<j+c ; l++)
res[k-i][l-j] += M[k][l];
return res;
}
I guess this is too slow, can anyone please suggest a faster way?
Your current algorithm is O((m - p) * (n - q) * p * q). The worst case is when p = m / 2 and q = n / 2.
The algorithm I'm going to describe will be O(m * n + p * q), which will be O(m * n) regardless of p and q.
The algorithm consists of 2 steps.
Let the input matrix A have size m x n and the window matrix have size p x q.
First, create a precomputed matrix B of the same size as the input matrix. Each element of B contains the sum of all the elements of the sub-matrix whose top-left element is at coordinate (1, 1) of the original matrix and whose bottom-right element is at the same coordinate as the element we are computing.
B[i, j] = Sum[k = 1..i, l = 1..j]( A[k, l] ) for all 1 <= i <= m, 1 <= j <= n
This can be done in O(m * n), by using this relation to compute each element in O(1):
B[i, j] = B[i - 1, j] + Sum[k = 1..j]( A[i, k] ) for all 2 <= i <= m, 1 <= j <= n (taking B[0, j] = 0, this also covers i = 1)
B[i - 1, j], which is everything of the sub-matrix we are computing except the current row, has been computed previously. You keep a prefix sum of the current row, so that you can use it to quickly compute the sum of the current row.
This is another way to compute B[i, j] in O(1), using the property of the 2D prefix sum:
B[i, j] = B[i - 1, j] + B[i, j - 1] - B[i - 1, j - 1] + A[i, j] for all 1 <= i <= m, 1 <= j <= n, treating out-of-range entries as 0
Then, the second step is to compute the result matrix S, whose size is p x q. If you make some observations, S[i, j] is the sum of all elements of the sub-matrix of size (m - p + 1) x (n - q + 1) whose top-left coordinate is (i, j) and whose bottom-right coordinate is (i + m - p, j + n - q).
Using the precomputed matrix B, you can compute the sum of any sub-matrix in O(1). Apply this to compute the result matrix S:
SubMatrixSum(top-left = (x1, y1), bottom-right = (x2, y2))
= B[x2, y2] - B[x1 - 1, y2] - B[x2, y1 - 1] + B[x1 - 1, y1 - 1]
Therefore, the complexity of the second step will be O(p * q).
The final complexity is as mentioned above, O(m * n), since p <= m and q <= n.
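Here is a sketch of both steps in C++ (1-indexed prefix sums with a zero border, matching the formulas above; the function name and vector-based interface are my own):

#include <cstdio>
#include <vector>
using namespace std;

// Sum of all p x q windows of the m x n matrix A, via a 2D prefix sum.
// Runs in O(m*n) regardless of p and q.
vector<vector<long long>> windowSum(const vector<vector<int>>& A, int p, int q) {
    int m = A.size(), n = A[0].size();
    // Step 1: B[i][j] = sum of A[1..i][1..j] (1-indexed; row/column 0 are zeros).
    vector<vector<long long>> B(m + 1, vector<long long>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            B[i][j] = B[i-1][j] + B[i][j-1] - B[i-1][j-1] + A[i-1][j-1];
    // Step 2: S[i][j] = sum of A over rows i..i+m-p, cols j..j+n-q (1-indexed).
    vector<vector<long long>> S(p, vector<long long>(q));
    for (int i = 1; i <= p; i++) {
        for (int j = 1; j <= q; j++) {
            int x1 = i, y1 = j, x2 = i + m - p, y2 = j + n - q;
            S[i-1][j-1] = B[x2][y2] - B[x1-1][y2] - B[x2][y1-1] + B[x1-1][y1-1];
        }
    }
    return S;
}

int main() {
    vector<vector<int>> A = {{1,2,3,4},{5,6,7,8},{9,9,0,0},{0,0,9,9}};
    for (auto& row : windowSum(A, 3, 3)) {   // expect: 14 18 22 / 29 22 15 / 18 18 18
        for (long long v : row) printf("%lld ", v);
        printf("\n");
    }
}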