I want to find the number of permutations of n numbers, where the numbers are 1 to n. The given condition is that the i-th position can only hold a number up to Si, where Si is given for each position.
1 <= n <= 10^6
1 <= Si <= n
For example:
n=5
then all five of its elements will be
1,2,3,4,5
and the given Si for each position are:
2,3,4,5,5
It shows that:
The 1st position can hold 1 or 2, but cannot hold any number from 3 to 5.
Similarly,
the 2nd position can hold numbers 1 to 3 only,
the 3rd position can hold numbers 1 to 4 only,
the 4th position can hold numbers 1 to 5 only,
the 5th position can hold numbers 1 to 5 only.
Some of its valid permutations are:
1,2,3,4,5
2,3,1,4,5
2,3,4,1,5 etc.
But these are not valid:
3,1,4,2,5 because 3 is at the 1st position.
1,2,5,3,4 because 5 is at the 3rd position.
I cannot come up with any idea for counting all the permutations that satisfy this condition.
Okay, if we have a guarantee that the numbers si are given in non-descending order, then it looks like it is possible to calculate the number of permutations in O(n).
The idea of the straightforward algorithm is as follows:
1. At step i, multiply the result by the current value of si[i].
2. We chose some number for position i. Since we need a permutation, that number cannot be repeated, so decrement all the remaining si[k], for k from i+1 to the end (i.e. n), by 1.
3. Increase i by 1, go back to step 1.
To illustrate on the example si = 2 3 3 4:
result = 1;
current si is "2 3 3 4", result *= si[0] (= 1*2 == 2), decrease 3, 3 and 4 by 1;
current si is "..2 2 3", result *= si[1] (= 2*2 == 4), decrease last 2 and 3 by 1;
current si is "....1 2", result *= si[2] (= 4*1 == 4), decrease last number by 1;
current si is "..... 1", result *= si[3] (= 4*1 == 4), done.
However, this straightforward approach would require O(n^2) time due to the decreasing steps. To optimize it we can easily observe that at the moment of result *= si[i], our si[i] has already been decreased exactly i times (assuming we start from 0, of course).
Thus the O(n) way:
unsigned int result = 1;
for (unsigned int i = 0; i < n; ++i)
{
result *= (si[i] - i);
}
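For completeness, here is a self-contained sketch of this approach; the sort handles the case where the si are not already non-descending (see the note further below), and the function name, 64-bit type and modulus are my own assumptions, since for n up to 10^6 the exact count overflows any built-in integer type:

#include <algorithm>
#include <cstdint>
#include <vector>

const std::uint64_t MOD = 1000000007ULL;    // assumed modulus; adjust to what your task requires

// Number of permutations respecting the limits s[i], modulo MOD.
std::uint64_t countPermutations(std::vector<std::uint64_t> s)
{
    std::sort(s.begin(), s.end());          // make the limits non-descending
    std::uint64_t result = 1;
    for (std::size_t i = 0; i < s.size(); ++i)
    {
        if (s[i] < i + 1)                   // fewer allowed values than positions so far
            return 0;                       // -> no valid permutation exists
        result = (result * ((s[i] - i) % MOD)) % MOD;
    }
    return result;
}
// countPermutations({2, 3, 4, 5, 5}) == 16 for the example in the question.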
For each si, count the number of elements in your array such that a[j] <= si, using binary search, and store the value in an array count[i]. The answer is then the product of all count[i]; however, we have to remove the redundancy from that answer (as the same number could be counted twice). For that you can sort the si and, for each s[i], subtract the number of limits that come before it (its index in the sorted order) from count[i]. The complexity is O(n log(n)). I hope this at least gives you an idea.
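A rough sketch of that counting idea, under the assumption that the available values live in an array a (for this problem simply 1..n); the function name is mine and overflow handling is left out for brevity:

#include <algorithm>
#include <cstdint>
#include <vector>

// Product over the sorted limits s[i] of (how many values in a are <= s[i]) minus i.
std::uint64_t countViaBinarySearch(std::vector<int> a, std::vector<int> s)
{
    std::sort(a.begin(), a.end());
    std::sort(s.begin(), s.end());
    std::uint64_t result = 1;
    for (std::size_t i = 0; i < s.size(); ++i)
    {
        // count[i]: number of available values not exceeding s[i]
        std::uint64_t cnt = std::upper_bound(a.begin(), a.end(), s[i]) - a.begin();
        if (cnt <= i)               // not enough candidates left for this position
            return 0;
        result *= cnt - i;          // i of those candidates are already used by earlier positions
    }
    return result;
}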
To complete Yuriy Ivaskevych's answer: if you don't know whether the si are given in increasing order, you can sort them first and it will also work.
And the result will be zero or negative if the permutations are impossible (e.g. 1 1 1 1 1).
You can try backtracking; it's a somewhat heavy-handed approach, but it will work.
Try:
http://www.thegeekstuff.com/2014/12/backtracking-example/
or google "backtracking tutorial C++".
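If you go that way, a minimal backtracking sketch for this particular counting problem could look like the following (my own illustration, not taken from the linked tutorial); note that it is exponential in the worst case, so it is only practical for small n:

#include <vector>

// Count valid permutations by trying every unused value v <= s[pos] at each position.
// Exponential in the worst case; only meant to illustrate the backtracking idea for small n.
long long countByBacktracking(const std::vector<int>& s, std::vector<bool>& used, int pos)
{
    int n = (int)s.size();
    if (pos == n) return 1;                  // every position filled: one valid permutation
    long long total = 0;
    for (int v = 1; v <= s[pos]; ++v)        // values allowed at this position
    {
        if (!used[v])
        {
            used[v] = true;                  // choose v for position pos
            total += countByBacktracking(s, used, pos + 1);
            used[v] = false;                 // undo the choice (backtrack)
        }
    }
    return total;
}

// Usage sketch:
//   std::vector<int> s = {2, 3, 4, 5, 5};
//   std::vector<bool> used(s.size() + 1, false);
//   long long count = countByBacktracking(s, used, 0);   // 16, matching the O(n) formula above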
I'm working on a business class assignment where we're using Excel to solve a problem with the setup and conditions below, but I wanted to find solutions by writing some code in C++, which is what I'm most familiar with from some school courses.
We have 4 stores where we need to invest 10 million dollars. The main conditions are:
It is necessary to invest at least 1mil per store.
The investments in the 4 stores must total 10 million.
Following the rules above, the most one can invest in a single store is 7 million
Each store has its own unique return-on-investment percentages based on the amount of money invested in it.
In other words, there is a large number of combinations that can be obtained by investing in each store. Repetition of numbers does not matter as long as the total is 10 per combination, but the order of the numbers does matter.
If my math is right, the total number of combinations is 7^4 = 2401, but the number of working solutions is smaller due to the condition that each combination must sum to 10.
What I'm trying to do in C++ is use loops to populate each row with 4 numbers such that their sum equals 10 (millions), for example:
7 1 1 1
1 7 1 1
1 1 7 1
1 1 1 7
6 2 1 1
6 1 2 1
6 1 1 2
5 3 1 1
5 1 3 1
5 1 1 3
5 1 2 2
5 2 1 2
5 2 2 1
I'd appreciate advice on how to tackle this. I'm still not quite sure whether using loops over an array (a 2D array/vector perhaps?) is a good idea; I have a vague notion that recursive functions might facilitate a solution.
Thanks for taking some time to read, I appreciate any and all advice for coming up with solutions.
Edit:
Here's some code I worked on just to get 50 rows of randomized numbers. I still have to implement the condition that a valid row combination must sum to 10 across the 4 values:
#include <cstdlib>    // rand
#include <iostream>

using std::cout;
using std::endl;

int main(){
    const int rows = 50;
    int values[rows][4];
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j <= 3; j++){
            values[i][j] = (rand() % 7 + 1);   // random amount in 1..7
            cout << values[i][j] << " ";
        }
        cout << endl;
    }
}
You can calculate this recursively. For each level, you have:
A target sum
The number of elements in that level
The minimum value each individual element can have
First, we determine our return type. What's your final output? It looks like a vector of vectors to me, so our recursive function will return the same.
Second, we determine the result of our degenerate case (at the "bottom" of the recursion), when the number of elements in this level is 1.
std::vector<std::vector<std::size_t>> recursive_combinations(std::size_t sum, std::size_t min_val, std::size_t num_elements)
{
std::vector<std::vector<std::size_t>> result {};
if (num_elements == 1)
{
result.push_back(std::vector<std::size_t>{sum});
return result;
}
...non-degenerate case goes here...
return result;
}
Next, we determine what happens when this level has more than 1 element in it. Split the sum into all possible pairs of the "first" element and the "remaining" group. e.g., if we have a target sum of 5, 3 num_elements, and a min_val of 1, we'd generate the pairs {1,4}, {2,3}, and {3,2}, where the first number in each pair is for the first element, and the second number in each pair is the remaining sum left over for the remaining group.
Recursively call the recursive_combinations function using this second number as the new sum and num_elements - 1 as the new num_elements to find the vector of vectors for the remaining group, and for each vector in the returned result, add in the first number of the corresponding pair.
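To make that concrete, here is one possible way the non-degenerate branch could be filled in (my own completion of the skeleton above, so treat it as a sketch rather than the definitive implementation):

#include <cstddef>
#include <utility>
#include <vector>

std::vector<std::vector<std::size_t>> recursive_combinations(std::size_t sum, std::size_t min_val, std::size_t num_elements)
{
    std::vector<std::vector<std::size_t>> result {};
    if (num_elements == 1)
    {
        result.push_back(std::vector<std::size_t>{sum});
        return result;
    }
    // Non-degenerate case: choose the first element, recurse on the remainder.
    // The first element ranges from min_val up to whatever still leaves at least
    // min_val for each of the remaining (num_elements - 1) slots.
    for (std::size_t first = min_val; first + (num_elements - 1) * min_val <= sum; ++first)
    {
        for (auto& tail : recursive_combinations(sum - first, min_val, num_elements - 1))
        {
            std::vector<std::size_t> combo {first};
            combo.insert(combo.end(), tail.begin(), tail.end());
            result.push_back(std::move(combo));
        }
    }
    return result;
}

// Usage for the store problem: recursive_combinations(10, 1, 4) returns every
// ordered way to split 10 (million) into 4 parts, each at least 1.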
From a given array (call it numbers[]), I want another array (results[]) which contains every sum that can be formed from elements of the first array.
For example, if I have numbers[] = {1,3,5}, results[] will be {1,3,5,4,8,6,9,0}.
There are 2^n possibilities.
It doesn't matter if a number appears twice, because results[] will be a set.
I did it for sums of pairs and triplets, and that was easy. But I don't understand how it works when we sum 0, 1, 2 or n numbers.
This is what I did for pairs :
std::unordered_set<int> pairPossibilities(std::vector<int> &numbers) {
std::unordered_set<int> results;
for(int i=0;i<numbers.size()-1;i++) {
for(int j=i+1;j<numbers.size();j++) {
results.insert(numbers.at(i)+numbers.at(j));
}
}
return results;
}
Also, assuming that the numbers[] is sorted, is there any possibility to sort results[] while we fill it ?
Thanks!
This can be done with Dynamic Programming (DP) in O(n*W), where W = sum{numbers}.
This is basically the same solution as the Subset Sum Problem, exploiting the fact that the problem has optimal substructure:
DP[i, 0] = true
DP[-1, w] = false, for w != 0
DP[i, w] = DP[i-1, w] OR DP[i-1, w - numbers[i]]
Start by following the above solution to find DP[n, sum{numbers}].
As a result, you will get:
DP[n , w] = true if and only if w can be constructed from numbers
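A minimal sketch of that table in C++, collapsed to a one-dimensional boolean vector indexed by the sum (a common way to realize the recurrence; the function name and the choice of std::vector<bool> are my own):

#include <numeric>
#include <vector>

// Returns every w in [0, sum(numbers)] that can be formed as a subset sum.
std::vector<int> allSubsetSums(const std::vector<int>& numbers)
{
    int W = std::accumulate(numbers.begin(), numbers.end(), 0);
    std::vector<bool> reachable(W + 1, false);   // reachable[w] plays the role of DP[i, w] for the current i
    reachable[0] = true;                         // the empty subset sums to 0
    for (int x : numbers)
        for (int w = W; w >= x; --w)             // go downwards so each number is used at most once
            if (reachable[w - x])
                reachable[w] = true;
    std::vector<int> sums;
    for (int w = 0; w <= W; ++w)
        if (reachable[w])
            sums.push_back(w);                   // already sorted by construction
    return sums;
}
// For {1, 3, 5} this yields {0, 1, 3, 4, 5, 6, 8, 9}.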
Following on from the Dynamic Programming answer, you could go with a recursive solution and then use memoization to cache the results: a top-down approach, in contrast to Amit's bottom-up one.
#include <vector>
using std::vector;

void generateSubsetSum(vector<int>& ans, int sum, vector<int>& nums, int i);

vector<int> subsetSum(vector<int>& nums)
{
    vector<int> ans;
    generateSubsetSum(ans, 0, nums, 0);
    return ans;
}

void generateSubsetSum(vector<int>& ans, int sum, vector<int>& nums, int i)
{
    if (i == (int)nums.size())
    {
        ans.push_back(sum);
        return;
    }
    generateSubsetSum(ans, sum + nums[i], nums, i + 1);   // include nums[i]
    generateSubsetSum(ans, sum, nums, i + 1);             // skip nums[i]
}
Result is : {9 4 6 1 8 3 5 0} for the set {1,3,5}
This simply picks the number at the first index i, adds it to the sum, and recurses. Once that call returns, the second branch follows with sum unchanged, i.e. without nums[i] added. To memoize this you would keep a cache of the sums already generated for each index i.
I would do something like this (it seems easier). [I wanted to put this in a comment, but I can't draw the shifting and removing of one element at a time there; you might need a linked list.]
1 3 5
3 5
-----
4 8
1 3 5
5
-----
6
1 3 5
3 5
5
------
9
Add 0 to the list in the end.
Another way to solve this is to create the subsets of the elements as vectors, then sum up each vector's data.
e.g.
1 3 5 = {1,3} + {1,5} + {3,5} + {1,3,5}, after removing the single-element sets.
Keep in mind that it is always easier said than done. A single tiny mistake in the implemented algorithm can take a lot of debugging time to find. =]]
There has to be a binary chop version, as well. This one is a bit heavy-handed and relies on that set of answers you mention to filter repeated results:
Split the list into 2,
and generate the list of sums for each half
by recursion:
the minimum state is either
2 entries, with 1 result,
or 3 entries with 3 results
alternatively, take it down to 1 entry with 0 results, if you insist
Then combine the 2 halves:
All the returned entries from both halves are legitimate results
There are 4 additional result sets to add to the output result by combining:
The first half inputs vs the second half inputs
The first half outputs vs the second half inputs
The first half inputs vs the second half outputs
The first half outputs vs the second half outputs
Note that the outputs of the two halves may have some elements in common, but they should be treated separately for these combines.
The inputs can be scrubbed from the returned outputs of each recursion if the inputs are legitimate final results. If they are, they can either be added back in at the top-level stage, or returned by the bottom-level stage and not considered again in the combining.
You could use a bitfield instead of a set to filter out the duplicates. There are reasonably efficient ways of stepping through a bitfield to find all the set bits. The max size of the bitfield is the sum of all the inputs.
There is no intelligence here, but lots of opportunity for parallel processing within the recursion and combine steps.
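As an aside, a small sketch of the bitfield idea mentioned above, using a std::bitset whose size is an assumed compile-time upper bound on the total sum:

#include <bitset>
#include <vector>

const std::size_t MAX_SUM = 1024;     // assumed upper bound on sum(numbers)

// Bit w of the result is set iff w is achievable as a subset sum.
std::bitset<MAX_SUM> subsetSumBits(const std::vector<unsigned>& numbers)
{
    std::bitset<MAX_SUM> bits;
    bits.set(0);                      // the empty subset sums to 0
    for (unsigned k : numbers)
        bits |= bits << k;            // every old sum s also yields s + k
    return bits;
}
// For {1, 3, 5} the set bits sit at 0, 1, 3, 4, 5, 6, 8 and 9; stepping through
// the set bits gives the results already de-duplicated and in sorted order.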
A partially ordered list of n numbers is given, and I have to find the numbers that do not follow the order (just find them and count them).
There are no repeated numbers.
There are no negative numbers.
MAX = 100000 is the capacity of the list.
n, the number of elements in the list, is given by the user.
Example of two lists:
1 2 5 6 3
1 6 2 9 7 4 8 10 13
For the first list the output is 2, since 5 and 6 should both come after 3 and are therefore out of order; for the second the output is 3, since 6, 9 and 7 are out of order.
The most important condition in this problem: do the search in linear time O(n), with quadratic as the worst acceptable case.
Here is part of the code I developed (however it is not valid, since it does a quadratic search).
The "unordered" function compares each element of the array with the one given by the "minimal" function; if it finds one bigger than that minimum, that element is out of order.
int unordered (int A[MAX], int n)
{
    int count = 0;
    for (int i = 0; i < n-1; i++){
        if (A[i] > minimal(A, n, i+1)){
            count++;
        }
    }
    return count;
}

The "minimal" function takes the minimum of all the elements in the list between the one currently being compared in the "unordered" function and the last one of the list (i < elements <= n). That minimum is then returned to be compared.

int minimal (int A[MAX], int n, int index)
{
    int i, minimal = 99999999;
    for (i = index; i < n; i++){
        if (A[i] <= minimal)
            minimal = A[i];
    }
    return minimal;
}
How can I do it more efficiently?
Start at the left of the list and compare the current number with the next one. Whenever the next one is smaller than the current one, remove the current number from the list and count one up. After removing a number at index 'n', set your current position to index 'n-1' and go on.
Because you remove at most n numbers from the list and compare the remaining ones in order, this algorithm is O(n).
I hope this helps. I must admit, though, that the task of finding numbers that are out of order isn't all that clearly defined.
If O(n) space is no problem, you can first do a linear pass (backwards) over the array and save the minimal value so far in another array. Instead of calling minimal you can then look up the minimum value in O(1), and your approach works in O(n).
Something like this:
int suffixMin[MAX]; //or: int *suffixMin = new int[n];
suffixMin[n-1] = A[n-1];
for(int i = n-2; i >= 0; --i)
    suffixMin[i] = std::min(A[i], suffixMin[i+1]);
This can even be done in O(1) extra space if you run the counting loop itself backwards, because then you only need to remember the current minimum.
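Putting the two remarks together, a sketch of the O(n) counting in a single backward pass with only the running minimum (names are my own):

// Count the elements that are greater than some element to their right
// (the "out of order" ones) in one backward pass.
int unorderedLinear(const int A[], int n)
{
    int count = 0;
    int minSoFar = A[n - 1];          // minimum of the suffix seen so far
    for (int i = n - 2; i >= 0; --i)
    {
        if (A[i] > minSoFar)
            count++;                   // A[i] is larger than something after it
        else
            minSoFar = A[i];
    }
    return count;
}
// For {1, 2, 5, 6, 3} this returns 2; for {1, 6, 2, 9, 7, 4, 8, 10, 13} it returns 3.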
Others have suggested some great answers, but here is an extra way you can think about this problem: using a stack.
Here's how it helps: push the leftmost element of the array onto the stack. Keep doing this until the element you are currently at (in the array) is less than the top of the stack. While it is, pop elements and increment your counter. Stop when it is greater than the top of the stack and push it in. In the end, when all array elements are processed, you'll have the count of those that are out of order.
Sample run: 1 5 6 3 7 4 10
Step 1: Stack => 1
Step 2: Stack => 1 5
Step 3: Stack => 1 5 6
Step 4: Now we see 3 is in. While 3 is less than top of stack, pop and increment counter. We get: Stack=> 1 3 -- Count = 2
Step 5: Stack => 1 3 7
Step 6: We got 4 now. Repeat same logic. We get: Stack => 1 3 4 -- Count = 3
Step 7: Stack => 1 3 4 10 -- Count = 3. And we're done.
This should be O(N) for time and space. Correct me if I'm wrong.
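A sketch of that stack idea in C++ (the function name is mine); it counts the elements that get popped, which matches the walkthrough above:

#include <stack>
#include <vector>

// Count out-of-order elements by popping every stacked value that is
// larger than the element currently being processed.
int countOutOfOrder(const std::vector<int>& a)
{
    std::stack<int> st;
    int count = 0;
    for (int x : a)
    {
        while (!st.empty() && st.top() > x)
        {
            st.pop();        // st.top() is followed by a smaller value, so it is out of order
            ++count;
        }
        st.push(x);
    }
    return count;
}
// For {1, 5, 6, 3, 7, 4, 10} this returns 3, matching the sample run above.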
This problem was asked to me in an Amazon interview:
Given an array of positive integers, you have to find the smallest positive integer that cannot be formed as a sum of numbers from the array.
Example:
Array:[4 13 2 3 1]
result = 11 (since 11 is the smallest positive number which cannot be formed from the given array elements)
What I did was:
sorted the array,
calculated the prefix sums,
traversed the sorted array and checked whether the next element is at most 1 greater than the sum so far, i.e. A[j] <= (sum + 1). If not, then the answer is sum + 1.
But this is an O(n log n) solution.
The interviewer was not satisfied with this and asked for a solution in less than O(n log n) time.
There's a beautiful algorithm for solving this problem in time O(n + Sort), where Sort is the amount of time required to sort the input array.
The idea behind the algorithm is to sort the array and then ask the following question: what is the smallest positive integer you cannot make using the first k elements of the array? You then scan forward through the array from left to right, updating your answer to this question, until you find the smallest number you can't make.
Here's how it works. Initially, the smallest number you can't make is 1. Then, going from left to right, do the following:
If the current number is bigger than the smallest number you can't make so far, then you know the smallest number you can't make - it's the one you've got recorded, and you're done.
Otherwise, the current number is less than or equal to the smallest number you can't make. The claim is that you can extend the range of makeable numbers. Right now, you know the smallest number you can't make with the first k elements of the array (call it candidate) and are looking at the value A[k]. Every number strictly below candidate can be made with the first k elements; in particular, candidate - A[k] can, since it is smaller than candidate. More generally, you can make any number in the range candidate to candidate + A[k] - 1, inclusive, because for any such number m, the value m - A[k] is at most candidate - 1 and can therefore be made from the first k elements, and adding A[k] to it gives m. Therefore, set candidate to candidate + A[k] and increment k.
In pseudocode:
Sort(A)
candidate = 1
for i from 1 to length(A):
if A[i] > candidate: return candidate
else: candidate = candidate + A[i]
return candidate
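A direct C++ rendering of that pseudocode, as a sketch (the 64-bit type is my own choice to keep the running candidate from overflowing):

#include <algorithm>
#include <cstdint>
#include <vector>

// Smallest positive integer that cannot be formed as a sum of elements of 'a'.
std::uint64_t smallestNonConstructible(std::vector<std::uint64_t> a)
{
    std::sort(a.begin(), a.end());
    std::uint64_t candidate = 1;
    for (std::uint64_t x : a)
    {
        if (x > candidate)          // gap found: candidate cannot be formed
            return candidate;
        candidate += x;             // everything below candidate + x is now reachable
    }
    return candidate;
}
// smallestNonConstructible({4, 13, 2, 3, 1}) == 11, as in the test run below.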
Here's a test run on [4, 13, 2, 1, 3]. Sort the array to get [1, 2, 3, 4, 13]. Then, set candidate to 1. We then do the following:
A[1] = 1, candidate = 1:
A[1] ≤ candidate, so set candidate = candidate + A[1] = 2
A[2] = 2, candidate = 2:
A[2] ≤ candidate, so set candidate = candidate + A[2] = 4
A[3] = 3, candidate = 4:
A[3] ≤ candidate, so set candidate = candidate + A[3] = 7
A[4] = 4, candidate = 7:
A[4] ≤ candidate, so set candidate = candidate + A[4] = 11
A[5] = 13, candidate = 11:
A[5] > candidate, so return candidate (11).
So the answer is 11.
The runtime here is O(n + Sort) because outside of sorting, the runtime is O(n). You can clearly sort in O(n log n) time using heapsort, and if you know some upper bound on the numbers you can sort in time O(n log U) (where U is the maximum possible number) by using radix sort. If U is a fixed constant (say, 10^9), then radix sort runs in time O(n) and this entire algorithm then runs in time O(n) as well.
Hope this helps!
Use bitvectors to accomplish this in linear time.
Start with an empty bitvector b. Then for each element k in your array, do this:
b = b | b << k | 2^(k-1)
To be clear, the i-th bit is set to 1 to represent the number i, and | 2^(k-1) sets the k-th bit to 1.
After you finish processing the array, the index of the first zero in b is your answer (counting from the right, starting at 1).
b=0
process 4: b = b | b<<4 | 1000 = 1000
process 13: b = b | b<<13 | 1000000000000 = 10001000000001000
process 2: b = b | b<<2 | 10 = 1010101000000101010
process 3: b = b | b<<3 | 100 = 1011111101000101111110
process 1: b = b | b<<1 | 1 = 11111111111001111111111
First zero: position 11.
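A sketch of this bitvector approach with std::bitset; the fixed size B is an assumed compile-time upper bound on the sum of all elements:

#include <bitset>
#include <vector>

const std::size_t B = 1 << 20;   // assumed bound: the total sum of the input stays below B

// Bit (k-1) of b represents the number k, matching the description above.
unsigned long long smallestNonFormable(const std::vector<unsigned>& a)
{
    std::bitset<B> b;
    for (unsigned k : a)
    {
        b |= (b << k);           // every already-reachable sum s also yields s + k
        b.set(k - 1);            // the element k taken on its own
    }
    for (std::size_t i = 0; i < B; ++i)
        if (!b.test(i))
            return i + 1;        // first zero, counting from the right starting at 1
    return B + 1;                // not reached while the assumed bound holds
}
// For {4, 13, 2, 3, 1} this returns 11.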
Consider all integers in the interval [2^i .. 2^(i+1) - 1]. And suppose all integers below 2^i can be formed from a sum of numbers from the given array. Also suppose that we already know C, which is the sum of all numbers below 2^i. If C >= 2^(i+1) - 1, every number in this interval may be represented as a sum of the given numbers. Otherwise we could check if the interval [2^i .. C + 1] contains any number from the given array. And if there is no such number, C + 1 is what we searched for.
Here is a sketch of the algorithm:
1. For each input number, determine which interval it belongs to, and update the corresponding sum: S[int_log(x)] += x.
2. Compute the prefix sums of array S: foreach i: C[i] = C[i-1] + S[i].
3. Filter array C to keep only the entries with values lower than the next power of 2.
4. Scan the input array once more and note which of the intervals [2^i .. C + 1] contain at least one input number: i = int_log(x) - 1; B[i] |= (x <= C[i] + 1).
5. Find the first interval that is not filtered out in step 3 and whose corresponding element of B[] is not set in step 4.
If it is not obvious why we can apply step 3, here is the proof. Choose any number between 2^i and C, then sequentially subtract from it all the numbers below 2^i in decreasing order. Eventually we get either some number less than the last subtracted number, or zero. If the result is zero, just add together all the subtracted numbers and we have the representation of the chosen number. If the result is non-zero and less than the last subtracted number, this result is also less than 2^i, so it is "representable" and none of the subtracted numbers are used for its representation. When we add these subtracted numbers back, we have the representation of the chosen number. This also suggests that instead of filtering intervals one by one we could skip several intervals at once by jumping directly to the int_log of C.
Time complexity is determined by the function int_log(), which is the integer logarithm, i.e. the index of the highest set bit in the number. If our instruction set contains the integer logarithm or any of its equivalents (count leading zeros, or tricks with floating point numbers), then the complexity is O(n). Otherwise we could use some bit hacking to implement int_log() in O(log log U) and obtain O(n * log log U) time complexity. (Here U is the largest number in the array.)
If step 1 (in addition to updating the sum) also updates the minimum value in the given range, step 4 is not needed anymore. We could just compare C[i] to Min[i+1]. This means we need only a single pass over the input array. Or we could apply this algorithm not to an array but to a stream of numbers.
Several examples:
Input:         [ 4 13  2  3  1]    [ 1  2  3  9]    [ 1  1  2  9]
int_log:         2  3  1  1  0       0  1  1  3       0  0  1  3
interval i:      0  1  2  3          0  1  2  3       0  1  2  3
S:               1  5  4 13          1  5  0  9       2  2  0  9
C:               1  6 10 23          1  6  6 15       2  4  4 13
filtered(C):     n  n  n  n          n  n  n  n       n  n  n  n
number in
[2^i..C+1]:      2  4  -             2  -  -          2  -  -
C+1:                   11                  7                5
For multi-precision input numbers this approach needs O(n * log M) time and O(log M) space, where M is the largest number in the array. The same time is needed just to read all the numbers (and in the worst case we need every bit of them).
Still, this result may be improved to O(n * log R), where R is the value found by this algorithm (actually, by the output-sensitive variant of it). The only modification needed for this optimization is, instead of processing whole numbers at once, to process them digit by digit: the first pass processes the low-order bits of each number (like bits 0..63), the second pass the next bits (like 64..127), etc. We can ignore all higher-order bits after the result is found. This also decreases the space requirements to O(K) numbers, where K is the number of bits in a machine word.
If you sort the array, it will work for you. Counting sort could do it in O(n), but in a practically large scenario the range can be pretty high.
Quicksort at O(n log n) will do the job for you:
def smallestPositiveInteger(array):
    candidate = 1
    n = len(array)
    array = sorted(array)
    for i in range(0, n):
        if array[i] <= candidate:
            candidate += array[i]
        else:
            break
    return candidate
I have an algorithm question in which the numbers 1 to N are given, a number of operations are to be performed on them, and then the min/max has to be found among them.
There are two operations, addition and subtraction, and each operation is given in the form a b c d, where a is the operation to be performed (1 for addition, 2 for subtraction), b is the starting position, c is the ending position, and d is the number to be added/subtracted.
for example
suppose numbers are 1 to N
and
N =5
1 2 3 4 5
We perform operations as
1 2 4 5
2 1 3 4
1 4 5 6
After these operations, the numbers 1 to N become, step by step:
1 7 8 9 5
-3 3 4 9 5
-3 3 4 15 11
So the maximum is 15 and min is -3
My approach:
I took the numbers between the lower and upper limit (in this case 1 to 5), stored them in an array, applied the operations, and then found the minimum and maximum.
Could there be any better approach?
I will assume that all update (addition/subtraction) operations happen before finding the max/min. I don't have a good solution for the case where updates and min/max queries are mixed together.
You can use a plain array where the value at index i is the difference between the values at index i and index (i - 1) of the original array. This makes the sum of our array from index 0 to index i equal to the value at index i of the original array.
Subtraction is just addition of the negated number, so the two can be treated similarly. When we need to add k to the original array from index i to index j, we add k at index i of our array and subtract k at index (j + 1) of our array. This takes O(1) time per update.
You can find the min/max of the original array by accumulating the values (prefix sums) and recording the max/min seen. This takes O(n) time, and I assume it is done once for the whole array.
Pseudocode:
a[N] // Original array
d[N] // Difference array
// Initialization
d[0] = a[0]
for (i = 1 to N-1)
d[i] = a[i] - a[i - 1]
// Addition (subtraction is similar)
add(from_idx, to_idx, amount) {
d[from_idx] += amount
d[to_idx + 1] -= amount
}
// Find max/min for the WHOLE array after add/subtract
current = max = min = d[0];
for (i = 1 to N - 1) {
current += d[i]; // Sum from d[0] to d[i] is a[i]
max = MAX(max, current);
min = MIN(min, current);
}
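Here is roughly how that could look in C++, as a sketch that reads the operations in the a b c d form from the question, with 1-based positions as in the example (the assumption that operation code 2 means subtraction comes from that example):

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    int N, Q;                                 // Q = number of operations (assumed to be part of the input)
    std::cin >> N >> Q;

    // Difference array for the initial sequence 1, 2, ..., N: every difference is 1.
    std::vector<long long> d(N + 2, 0);       // 1-based, with a spare slot for index j + 1
    for (int i = 1; i <= N; ++i)
        d[i] = 1;

    for (int q = 0; q < Q; ++q)
    {
        long long a, b, c, amount;            // "a b c d" exactly as in the question
        std::cin >> a >> b >> c >> amount;
        if (a == 2) amount = -amount;         // operation 2 = subtraction (as in the example)
        d[b] += amount;                       // start adding at position b ...
        d[c + 1] -= amount;                   // ... and cancel it after position c
    }

    // One prefix-sum pass recovers the final values and tracks min/max.
    long long current = d[1];
    long long mx = current, mn = current;
    for (int i = 2; i <= N; ++i)
    {
        current += d[i];
        mx = std::max(mx, current);
        mn = std::min(mn, current);
    }
    std::cout << mn << " " << mx << "\n";     // e.g. -3 and 15 for the sample in the question
    return 0;
}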
Generally there is no single "best way" to find the min/max from a performance point of view, because it depends on how the application will be used.
- Finding the max and min in a list takes O(n) time, so if you want to run many operations (many relative to the input size), your approach of finding the min/max after all the operations have taken place is fine.
- But if the list holds many elements and you don't want to run that many operations, you are better off checking each result of an operation to see whether it is a new max/min, and updating if necessary.