Time complexity of an iterative algorithm - C++

I am trying to find the time complexity of this algorithm.
The iterative algorithm produces all the bit-strings within a given Hamming distance from the input bit-string. It generates all increasing sequences 0 <= a[0] < ... < a[dist-1] < strlen(num) and inverts the bits at the corresponding indices.
The vector a is supposed to hold the indices of the bits that have to be inverted. So if a contains the current index i, we print 1 instead of 0 and vice versa; otherwise we print the bit as is (see the else-part), as shown below:
// e.g. hamming("0000", 2);
void hamming(const char* num, size_t dist) {
    assert(dist > 0);
    vector<int> a(dist);
    size_t k = 0, n = strlen(num);
    a[k] = -1;
    while (true)
        if (++a[k] >= n)
            if (k == 0)
                return;
            else {
                --k;
                continue;
            }
        else
            if (k == dist - 1) {
                // this is an O(n) operation and will be called
                // (n choose dist) times, in total.
                print(num, a);
            } else {
                a[k+1] = a[k];
                ++k;
            }
}
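print itself isn't shown in the question; based on the description above, a plausible O(n) implementation might look like this (a sketch, not the asker's actual code):

#include <cstdio>
#include <cstring>
#include <vector>
using namespace std;

// Print num with the bits at the indices held in a inverted.
// Marking the flips first keeps this O(n + dist) = O(n).
void print(const char* num, const vector<int>& a) {
    size_t n = strlen(num);
    vector<bool> flip(n, false);
    for (int idx : a) flip[idx] = true;   // indices whose bits to invert
    for (size_t i = 0; i < n; ++i)
        putchar(flip[i] ? (num[i] == '0' ? '1' : '0') : num[i]);
    putchar('\n');
}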
What is the Time Complexity of this algorithm?
My attempt says:
dist * n + (n choose t) * n + 2
but this doesn't seem to be right; consider the following examples, all with dist = 2:
len = 3, (3 choose 2) = 3 * O(n), 10 while iterations
len = 4, (4 choose 2) = 6 * O(n), 15 while iterations
len = 5, (5 choose 2) = 10 * O(n), 21 while iterations
len = 6, (6 choose 2) = 15 * O(n), 28 while iterations
Here are two representative runs (with the print happening at the start of the loop):
000, len = 3
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
k = 0, total_iter = 5
vector a = 0 3
k = 1, total_iter = 6
vector a = 1 1
Paid O(n)
k = 1, total_iter = 7
vector a = 1 2
k = 0, total_iter = 8
vector a = 1 3
k = 1, total_iter = 9
vector a = 2 2
k = 0, total_iter = 10
vector a = 2 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
0000, len = 4
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
Paid O(n)
k = 1, total_iter = 5
vector a = 0 3
k = 0, total_iter = 6
vector a = 0 4
k = 1, total_iter = 7
vector a = 1 1
Paid O(n)
k = 1, total_iter = 8
vector a = 1 2
Paid O(n)
k = 1, total_iter = 9
vector a = 1 3
k = 0, total_iter = 10
vector a = 1 4
k = 1, total_iter = 11
vector a = 2 2
Paid O(n)
k = 1, total_iter = 12
vector a = 2 3
k = 0, total_iter = 13
vector a = 2 4
k = 1, total_iter = 14
vector a = 3 3
k = 0, total_iter = 15
vector a = 3 4

The while loop is somewhat clever and subtle, and it's arguable that it's doing two different things (or even three if you count the initialisation of a). That's what's making your complexity calculations challenging, and it's also less efficient than it could be.
In the abstract, to incrementally compute the next set of indices from the current one, the idea is to find the last index, i, that's less than n-dist+i, increment it, and set the following indexes to a[i]+1, a[i]+2, and so on.
For example, if dist=5, n=11 and your indexes are:
0, 3, 5, 9, 10
Then 5 is the last value less than n-dist+i (because n-dist is 6, and 10=6+4, 9=6+3, but 5<6+2).
So we increment 5, and set the subsequent integers to get the set of indexes:
0, 3, 6, 7, 8
Now consider how your code runs, assuming k=4
0, 3, 5, 9, 10
++a[k] is 11, so k becomes 3.
++a[k] is 10, so a[k+1] becomes 10, and k becomes 4.
++a[k] is 11, so k becomes 3.
++a[k] is 11, so k becomes 2.
++a[k] is 6, so a[k+1] becomes 6, and k becomes 3.
++a[k] is 7, so a[k+1] becomes 7, and k becomes 4.
++a[k] is 8, and we continue to call the print function.
This code is correct, but it's not efficient, because k scuttles backwards and forwards as it searches for the highest index that can be incremented without causing an overflow in the higher indices. In fact, if the highest such index is j positions from the end, the code uses a non-linear number of iterations of the while loop. You can easily demonstrate this yourself by tracing how many iterations of the while loop occur when n == dist for different values of n. There is exactly one line of output, but you'll see O(2^n) growth in the number of iterations (in fact, you'll see 2^(n+1)-2 iterations).
This scuttling makes your code needlessly inefficient, and also hard to analyse.
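You can check this concretely with a small harness — a sketch that reproduces the question's loop with print() stubbed out and an iteration counter added:

#include <cassert>
#include <cstdio>
#include <cstring>
#include <vector>
using namespace std;

// The question's hamming(), with print() replaced by a no-op and an
// iteration counter added, so the n == dist blow-up can be measured.
size_t hamming_count(const char* num, size_t dist) {
    assert(dist > 0);
    vector<int> a(dist);
    size_t k = 0, n = strlen(num), iters = 0;
    a[k] = -1;
    while (true) {
        ++iters;
        if (++a[k] >= (int)n) {
            if (k == 0) return iters;
            --k;                        // backtrack, then re-increment a[k]
        } else if (k == dist - 1) {
            /* print(num, a) would run here */
        } else {
            a[k + 1] = a[k];
            ++k;
        }
    }
}

int main() {
    printf("%zu\n", hamming_count("111", 3));   // 14 = 2^4 - 2
    printf("%zu\n", hamming_count("1111", 4));  // 30 = 2^5 - 2
}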
Instead, you can write the code in a more direct way:
void hamming2(const char* num, size_t dist) {
    vector<int> a(dist);   // note: int a[dist] would be a VLA, which isn't standard C++
    for (size_t i = 0; i < dist; i++) {
        a[i] = i;
    }
    size_t n = strlen(num);
    while (true) {
        print(num, a);
        int i;
        for (i = dist - 1; i >= 0; i--) {
            if (a[i] < (int)(n - dist + i)) break;
        }
        if (i < 0) return;
        a[i]++;
        for (size_t j = i + 1; j < dist; j++) a[j] = a[i] + j - i;
    }
}
Now, each time through the while loop produces a new set of indexes. The exact cost per iteration is not straightforward, but since print is O(n), and the remaining code in the while loop is at worst O(dist), the overall cost is O(N_INCR_SEQ(n, dist) * n), where N_INCR_SEQ(n, dist) is the number of increasing sequences of natural numbers < n of length dist — that is, exactly (n choose dist), since each such sequence picks dist distinct indices.

Notice that, given n, which represents the length, and t, which represents the required distance, the number of increasing, non-negative sequences of t integers between 1 and n (or, in index form, between 0 and n-1) is indeed n choose t, since we pick t distinct indices.
The problem occurs with your generation of those sequences:
- First, notice that, for example, in the case of length 4 you actually go over 5 different indices, 0 to 4.
- Secondly, notice that you also take sequences with identical indices into account (in the case of t=2: 0 0, 1 1, 2 2 and so on); in general, you go through every non-decreasing sequence instead of through every increasing sequence.
So when calculating the TC of your program, make sure you take that into account.
Hint: try to make a one-to-one correspondence from the universe of those sequences to the universe of integer solutions of some equation.
If you need the direct solution, take a look here:
https://math.stackexchange.com/questions/432496/number-of-non-decreasing-sequences-of-length-m
The final solution is (n+t-1) choose t; but, noticing the first bullet, in your program it's actually ((n+1)+t-1) choose t, since you loop over one extra index.
Denote
A := ((n+1)+t-1) choose (t),    B := n choose t
Overall we get O(1) + B*O(n) + (A-B)*O(1).
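As a sanity check against the iteration counts in the question: for t = 2, A = (n+2) choose 2 = (n+2)(n+1)/2, which gives 10, 15, 21 and 28 for n = 3, 4, 5 and 6 — exactly the observed numbers of while iterations.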

Related

how to calculate multiset of elements given probability on each element?

Let's say I have a total number
tN = 12
and a set of elements
elem = [1,2,3,4]
and a prob for each element to be taken
prob = [0.0, 0.5, 0.75, 0.25]
I need to get a random multiset of these elements, such that:
- the taken elements reflect the prob
- the elements sum to tN
With the example above, here are some possible outcomes:
3 3 2 4
2 3 2 3 2
3 4 2 3
2 2 3 3 2
3 2 3 2 2
At the moment, max tN will be 64, and the elements are the ones above (1, 2, 3, 4).
Is this a knapsack problem? How would you easily solve it? Both "on the fly" and "pre-calculate" approaches are allowed (or at least, it depends on the computation time). I'm doing it for a C++ app.
Mission: the final sequence doesn't need to match the percentages exactly; elements with a higher prob should just be more likely to appear. In a few words: in the example, I'd rather get sequences with more 3s and 2s than 4s, and no 1s.
Here's an attempt to select elements according to their prob, over 10 draws:
Randomizer randomizer; // presumably returns a float in [0, 1) from getRandomValue()
int tN = 12;
std::vector<int> elem = {2, 3, 4};
std::vector<float> prob = {0.5f, 0.75f, 0.25f};
float probSum = std::accumulate(begin(prob), end(prob), 0.0f, std::plus<float>());
std::vector<float> probScaled;
for (size_t i = 0; i < prob.size(); i++)
{
    probScaled.push_back((i == 0 ? 0.0f : probScaled[i - 1]) + (prob[i] / probSum));
}
for (size_t r = 0; r < 10; r++)
{
    float rnd = randomizer.getRandomValue();
    int index = 0;
    for (size_t i = 0; i < probScaled.size(); i++)
    {
        if (rnd < probScaled[i])
        {
            index = i;
            break;
        }
    }
    std::cout << elem[index] << std::endl;
}
which gives, for example, this choice:
3
3
2
2
4
2
2
4
3
3
Now I just need to build a multiset that sums up to tN. Any tips?

find minimum sum of non-neighbouring K entries inside an array

Given an integer array A of size N, find the minimum sum of K non-neighboring entries (entries can't be adjacent to one another; for example, if K were 2, you couldn't take A[2] and A[3] and call it the minimum sum, even if it were, because those are adjacent/neighboring to one another). Example:
A[] = {355, 46, 203, 140, 28}, k = 2, result would be 74 (46 + 28)
A[] = {9, 4, 0, 9, 14, 7, 1}, k = 3, result would be 10 (9 + 0 + 1)
The problem is somewhat similar to House Robber on LeetCode, except that instead of finding the maximum sum of non-adjacent entries, we are tasked with finding the minimum sum, with a constraint of K entries.
From my perspective, this is clearly a dynamic programming problem, so I tried to break the problem down recursively and implemented something like this:
#include <vector>
#include <iostream>
using namespace std;

int minimal_k(vector<int>& nums, int i, int k)
{
    if (i == 0) return nums[0];
    if (i < 0 || !k) return 0;
    return min(minimal_k(nums, i - 2, k - 1) + nums[i], minimal_k(nums, i - 1, k));
}

int main()
{
    // example above
    vector<int> nums{9, 4, 0, 9, 14, 7, 1};
    cout << minimal_k(nums, nums.size() - 1, 3);
    // output is 4, wrong answer
}
This was my attempt at the solution; I have played around with it a lot, but no luck. So what would be a solution to this problem?
This line:
if (i < 0 || !k) return 0;
If k is 0, you should probably return 0. But if i < 0, or if the effective length of the array is less than k, you probably need to return a VERY LARGE value so that the summed result goes higher than any valid solution.
In my solution, I have the recursion return INT_MAX as a long long when recursing into an invalid subset or when k exceeds the remaining length.
And as with any of these dynamic programming and recursion problems, a cache of results, so that you don't repeat the same recursive search, will help out a bunch. This will speed things up by several orders of magnitude for very large input sets.
Here's my solution.
#include <iostream>
#include <vector>
#include <unordered_map>
#include <algorithm>
#include <climits>   // for INT_MAX
using namespace std;

// the "cache" is a map from offset to another map
// that tracks k to a final result.
typedef unordered_map<size_t, unordered_map<size_t, long long>> CACHE_MAP;

bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result);
void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result);

long long minimal_k_impl(const vector<int>& nums, size_t offset, size_t k, CACHE_MAP& cache)
{
    long long result = INT_MAX;
    size_t len = nums.size();
    if (k == 0)
    {
        return 0;
    }
    if (offset >= len)
    {
        return INT_MAX; // exceeded array boundary, return INT_MAX
    }
    size_t effective_length = len - offset;

    // If we have more k than remaining elements, return INT_MAX to indicate
    // that this recursion is invalid
    // you might be able to reduce to checking (effective_length/2+1 < k)
    if ((effective_length < k) || ((effective_length == k) && (k != 1)))
    {
        return INT_MAX;
    }
    if (get_cache_result(cache, offset, k, result))
    {
        return result;
    }
    long long sum1 = nums[offset] + minimal_k_impl(nums, offset + 2, k - 1, cache);
    long long sum2 = minimal_k_impl(nums, offset + 1, k, cache);
    result = std::min(sum1, sum2);
    insert_into_cache(cache, offset, k, result);
    return result;
}

long long minimal_k(const vector<int>& nums, size_t k)
{
    CACHE_MAP cache;
    return minimal_k_impl(nums, 0, k, cache);
}

bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result)
{
    // effectively this code does this:
    // result = cache[offset][k]
    bool ret = false;
    auto itor1 = cache.find(offset);
    if (itor1 != cache.end())
    {
        auto& inner_map = itor1->second;
        auto itor2 = inner_map.find(k);
        if (itor2 != inner_map.end())
        {
            ret = true;
            result = itor2->second;
        }
    }
    return ret;
}

void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result)
{
    cache[offset][k] = result;
}

int main()
{
    vector<int> nums1{ 355, 46, 203, 140, 28 };
    vector<int> nums2{ 9, 4, 0, 9, 14, 7, 1 };
    vector<int> nums3{ 8, 6, 7, 5, 3, 0, 9, 5, 5, 5, 1, 2, 9, -10 };
    long long result = minimal_k(nums1, 2);
    std::cout << result << std::endl;
    result = minimal_k(nums2, 3);
    std::cout << result << std::endl;
    result = minimal_k(nums3, 3);
    std::cout << result << std::endl;
    return 0;
}
This is, at its core, a sorting-related problem: finding the sum of the minimum k non-adjacent elements requires bringing the minimum-value elements next to each other, which sorting does. Let's look at this sorting approach.
Given input array = [9, 4, 0, 9, 14, 7, 1] and k = 3.
Create another array that contains the elements of the input array together with their indexes, as shown below:
[9, 0], [4, 1], [0, 2], [9, 3], [14, 4], [7, 5], [1, 6]
Then sort this array.
The motive behind this element-and-index array is that after sorting, the index information of each element is not lost.
One more array is required to keep a record of used indexes, so the initial state of the information after sorting is as shown below:
Element and Index array
..............................
| 0 | 1 | 4 | 7 | 9 | 9 | 14 |
..............................
2 6 1 5 3 0 4 <-- Index
Used index record array
..............................
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6 <-- Index
In the used index record array, 0 (false) means the element at this index is not yet included in the minimum sum.
The front element of the sorted array is the minimum-value element; we include it in the minimum sum and update the used index record array to indicate that this element is used, as shown below.
The front element is 0 at index 2, so we set 1 (true) at index 2 of the used index record array, as shown below:
min sum = 0
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6
Iterate to the next element in the sorted array; as you can see above, it is 1 and has index 6. To include 1 in the minimum sum we have to find out whether the left or right adjacent element of 1 is already used. 1 has index 6, the last position in the input array, so we only have to check whether the value at index 5 is already used, and this can be done by looking at the used index record array. As shown above, usedIndexRecord[5] = 0, so 1 can be considered for the minimum sum. After using 1, the state is updated to the following:
min sum = 0 + 1
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Then iterate to the next element, which is 4 at index 1; this cannot be considered because the element at index 0 is already used. The same happens with the elements 7 and 9, which are at indexes 5 and 3 respectively, adjacent to used elements.
Finally, iterating to 9 at index 0: looking at the used index record array, usedIndexRecord[1] = 0, and that's why 9 can be included in the minimum sum, and the final state reached is the following:
min sum = 0 + 1 + 9
Used index record array
..............................
| 1 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Finally, minimum sum = 10.
One of the worst-case scenarios is when the input array is already sorted; then at least 2*k - 1 elements have to be iterated over to find the minimum sum of k non-adjacent elements, as shown below:
input array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and k = 4; then the elements 1, 3, 5 and 7 are the ones considered for the minimum sum:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Note: you have to include all input validation; for example, one validation is that if you want to find the minimum sum of k non-adjacent elements, the input should have at least 2*k - 1 elements. I am not including these validations because I am aware of all the input constraints of the problem.
#include <iostream>
#include <vector>
#include <algorithm>
using std::cout;

long minSumOfNonAdjacentKEntries(std::size_t k, const std::vector<int>& arr)
{
    if (arr.size() < 2)
    {
        return 0;
    }
    std::vector<std::pair<int, std::size_t>> numIndexArr;
    numIndexArr.reserve(arr.size());
    for (std::size_t i = 0, arrSize = arr.size(); i < arrSize; ++i)
    {
        numIndexArr.emplace_back(arr[i], i);
    }
    std::sort(numIndexArr.begin(), numIndexArr.end(),
              [](const std::pair<int, std::size_t>& a,
                 const std::pair<int, std::size_t>& b) { return a.first < b.first; });
    long minSum = numIndexArr.front().first;
    std::size_t elementCount = 1;
    std::size_t lastIndex = arr.size() - 1;
    std::vector<bool> usedIndexRecord(arr.size(), false);
    usedIndexRecord[numIndexArr.front().second] = true;
    for (std::vector<std::pair<int, std::size_t>>::const_iterator it = numIndexArr.cbegin() + 1,
         endIt = numIndexArr.cend(); elementCount < k && endIt != it; ++it)
    {
        bool leftAdjacentElementUsed = (0 == it->second) ? false : usedIndexRecord[it->second - 1];
        bool rightAdjacentElementUsed = (lastIndex == it->second) ? false : usedIndexRecord[it->second + 1];
        if (!leftAdjacentElementUsed && !rightAdjacentElementUsed)
        {
            minSum += it->first;
            ++elementCount;
            usedIndexRecord[it->second] = true;
        }
    }
    return minSum;
}

int main()
{
    cout << "k = 2, [355, 46, 203, 140, 28], min sum = " << minSumOfNonAdjacentKEntries(2, {355, 46, 203, 140, 28})
         << '\n';
    cout << "k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = " << minSumOfNonAdjacentKEntries(3, {9, 4, 0, 9, 14, 7, 1})
         << '\n';
}
Output:
k = 2, [355, 46, 203, 140, 28], min sum = 74
k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = 10

how can we find the nth 3 word combination from a word corpus of 3000 words

I have a word corpus of, say, 3000 words, such as [hello, who, this, ...].
I want to find the nth 3-word combination from this corpus. I am fine with any order, as long as the algorithm gives consistent output.
What would be the time complexity of the algorithm?
I have seen this answer, but was looking for something simpler.
(Note that I will be using 1-based indexes and ranks throughout this answer.)
To generate all combinations of 3 elements from a list of n elements, we'd take all elements from 1 to n-2 as the first element, then for each of these we'd take all elements after the first element up to n-1 as the second element, then for each of these we'd take all elements after the second element up to n as the third element. This gives us a fixed order, and a direct relation between the rank and a specific combination.
If we take element i as the first element, there are (n-i choose 2) possibilities for the second and third element, and thus (n-i choose 2) combinations with i as the first element. If we then take element j as the second element, there are (n-j choose 1) = n-j possibilities for the third element, and thus n-j combinations with i and j as the first two elements.
Linear search in tables of binomial coefficients
With tables of these binomial coefficients, we can quickly find a specific combination, given its rank. Let's look at a simplified example with a list of 10 elements; these are the number of combinations with element i as the first element:
i
1 C(9,2) = 36
2 C(8,2) = 28
3 C(7,2) = 21
4 C(6,2) = 15
5 C(5,2) = 10
6 C(4,2) = 6
7 C(3,2) = 3
8 C(2,2) = 1
---
120 = C(10,3)
And these are the number of combinations with element j as the second element:
j
2 C(8,1) = 8
3 C(7,1) = 7
4 C(6,1) = 6
5 C(5,1) = 5
6 C(4,1) = 4
7 C(3,1) = 3
8 C(2,1) = 2
9 C(1,1) = 1
So if we're looking for the combination with e.g. rank 96, we look at the number of combinations for each choice of first element i, until we find which group of combinations the combination ranked 96 is in:
i
1    36    96 > 36    96 - 36 = 60
2    28    60 > 28    60 - 28 = 32
3    21    32 > 21    32 - 21 = 11
4    15    11 <= 15
So we know that the first element i is 4, and that within the 15 combinations with i=4, we're looking for the eleventh combination. Now we look at the number of combinations for each choice of second element j, starting after 4:
j
5    5    11 > 5    11 - 5 = 6
6    4    6 > 4     6 - 4 = 2
7    3    2 <= 3
So we know that the second element j is 7, and that the third element is the second combination with j=7, which is k=9. So the combination with rank 96 contains the elements 4, 7 and 9.
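The linear-search procedure above can be transcribed directly into code; here is a sketch in C++ for the n = 10 example, using 1-based ranks and elements (the function name is just illustrative):

#include <cstdio>

// binomial coefficient C(n, 2) for small n
static long C2(long n) { return n * (n - 1) / 2; }

// Linear search: find the combination [i, j, k] with the given 1-based rank.
void rankToCombination(long n, long rank, long out[3]) {
    long i = 1;
    while (rank > C2(n - i)) {   // C(n-i, 2) combinations start with i
        rank -= C2(n - i);
        ++i;
    }
    long j = i + 1;
    while (rank > n - j) {       // n-j combinations have j as second element
        rank -= n - j;
        ++j;
    }
    out[0] = i; out[1] = j; out[2] = j + rank;
}

int main() {
    long c[3];
    rankToCombination(10, 96, c);
    printf("%ld %ld %ld\n", c[0], c[1], c[2]);  // expect 4 7 9
}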
Binary search in tables of running total of binomial coefficients
Instead of creating a table of the binomial coefficients and then performing a linear search, it is of course more efficient to create a table of the running totals of the binomial coefficients, and then perform a binary search on it. This improves the time complexity from O(N) to O(logN); in the case of N=3000, each of the two look-ups can be done in about log2(3000) ≈ 12 steps.
So we'd store:
i
1 36
2 64
3 85
4 100
5 110
6 116
7 119
8 120
and:
j
2 8
3 15
4 21
5 26
6 30
7 33
8 35
9 36
Note that when finding j in the second table, you have to subtract the sum corresponding with i from the sums. Let's walk through the example of rank 96 and combination [4,7,9] again; we find the first value that is greater than or equal to the rank:
3    85     96 > 85
4    100    96 <= 100
So we know that i=4; we then subtract the previous sum, the one corresponding to i-1, to get:
96 - 85 = 11
Now we look at the table for j, but we start after j=4, and subtract the sum corresponding to 4, which is 21, from the sums. Then, again, we find the first value that is greater than or equal to the rank we're looking for (which is now 11):
6    30 - 21 = 9     11 > 9
7    33 - 21 = 12    11 <= 12
So we know that j=7; we subtract the previous sum corresponding to j-1, to get:
11 - 9 = 2
So we know that the second element j is 7, and that the third element is the second combination with j=7, which is k=9. So the combination with rank 96 contains the elements 4, 7 and 9.
Hard-coding the look-up tables
It is of course unnecessary to generate these look-up tables again every time we want to perform a look-up. We only need to generate them once, and then hard-code them into the rank-to-combination algorithm; this should take only 2998 * 64-bit + 2998 * 32-bit = 35kB of space, and make the algorithm incredibly fast.
Inverse algorithm
The inverse algorithm, to find the rank given a combination of elements [i,j,k] then means:
Find the indexes of the elements in the list; if the list is sorted (e.g. words sorted alphabetically) this can be done with a binary search in O(logN).
Find the sum in the table for i that corresponds with i-1.
Add to that the sum in the table for j that corresponds with j-1, minus the sum that corresponds with i.
Add to that k-j.
Let's look again at the same example with the combination of elements [4,7,9]:
i=4 -> table_i[3] = 85
j=7 -> table_j[6] - table_j[4] = 30 - 21 = 9
k=9 -> k-j = 2
rank = 85 + 9 + 2 = 96
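As a sketch, the four steps above can also be computed on the fly in C++, without pre-built tables, using C(n-a, 2) = (n-a)(n-a-1)/2:

#include <cstdio>

// Rank of the 1-based combination i < j < k out of elements 1..n.
long combinationToRank(long n, long i, long j, long k) {
    long rank = 0;
    for (long a = 1; a < i; a++)
        rank += (n - a) * (n - a - 1) / 2;  // C(n-a, 2): combinations starting with a
    for (long b = i + 1; b < j; b++)
        rank += n - b;                      // C(n-b, 1): combinations starting with i, b
    return rank + (k - j);
}

int main() {
    printf("%ld\n", combinationToRank(10, 4, 7, 9));  // expect 96
}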
Look-up tables for N=3000
This snippet generates the look-up table with the running total of the binomial coefficients for i = 1 to 2998:
function C(n, k) { // binomial coefficient (Pascal's triangle)
    if (k < 0 || k > n) return 0;
    if (k > n - k) k = n - k;
    if (!C.t) C.t = [[1]];
    while (C.t.length <= n) {
        C.t.push([1]);
        var l = C.t.length - 1;
        for (var i = 1; i < l / 2; i++)
            C.t[l].push(C.t[l - 1][i - 1] + C.t[l - 1][i]);
        if (l % 2 == 0)
            C.t[l].push(2 * C.t[l - 1][(l - 2) / 2]);
    }
    return C.t[n][k];
}

for (var total = 0, x = 2999; x > 1; x--) {
    total += C(x, 2);
    document.write(total + ", ");
}
This snippet generates the look-up table with the running total of the binomial coefficients for j = 2 to 2999:
for (var total = 0, x = 2998; x > 0; x--) {
    total += x;
    document.write(total + ", ");
}
Code example
Here's a quick code example, unfortunately without the full hardcoded look-up tables, because of the size restriction on answers on SO. Run the snippets above and paste the results into the arrays iTable and jTable (after the leading zeros) to get the faster version with hard-coded look-up tables.
function combinationToRank(i, j, k) {
    return iTable[i - 1] + jTable[j - 1] - jTable[i] + k - j;
}

function rankToCombination(rank) {
    var i = binarySearch(iTable, rank, 1);
    rank -= iTable[i - 1];
    rank += jTable[i];
    var j = binarySearch(jTable, rank, i + 1);
    rank -= jTable[j - 1];
    var k = j + rank;
    return [i, j, k];

    function binarySearch(array, value, first) {
        var last = array.length - 1;
        while (first < last - 1) {
            var middle = Math.floor((last + first) / 2);
            if (value > array[middle]) first = middle;
            else last = middle;
        }
        return (value <= array[first]) ? first : last;
    }
}

var iTable = [0]; // append look-up table values here
var jTable = [0, 0]; // and here

// remove this part when using hard-coded look-up tables
function C(n,k){if(k<0||k>n)return 0;if(k>n-k)k=n-k;if(!C.t)C.t=[[1]];while(C.t.length<=n){C.t.push([1]);var l=C.t.length-1;for(var i=1;i<l/2;i++)C.t[l].push(C.t[l-1][i-1]+C.t[l-1][i]);if(l%2==0)C.t[l].push(2*C.t[l-1][(l-2)/2])}return C.t[n][k]}
for (var iTotal = 0, jTotal = 0, x = 2999; x > 1; x--) {
    iTable.push(iTotal += C(x, 2));
    jTable.push(jTotal += x - 1);
}

document.write(combinationToRank(500, 1500, 2500) + "<br>");
document.write(rankToCombination(1893333750) + "<br>");

All possible combinations of coins

I need to write a program that displays all possible change combinations, given an array of denominations [1, 2, 5, 10, 20, 50, 100, 200] // 1 = 1 cent
Value to make the change from = 300
I'm basing my code on the solution from this site http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/
#include <stdio.h>

int count(int S[], int m, int n)
{
    int i, j, x, y;
    // We need n+1 rows as the table is constructed in a bottom-up manner
    // using the base case of value 0 (n = 0)
    int table[n+1][m];
    // Fill the entries for the 0 value case (n = 0)
    for (i = 0; i < m; i++)
        table[0][i] = 1;
    // Fill the rest of the table entries in a bottom-up manner
    for (i = 1; i < n+1; i++)
    {
        for (j = 0; j < m; j++)
        {
            // Count of solutions including S[j]
            x = (i-S[j] >= 0) ? table[i - S[j]][j] : 0;
            // Count of solutions excluding S[j]
            y = (j >= 1) ? table[i][j-1] : 0;
            // total count
            table[i][j] = x + y;
        }
    }
    return table[n][m-1];
}

// Driver program to test above function
int main()
{
    int arr[] = {1, 2, 5, 10, 20, 50, 100, 200}; // coins array
    int m = sizeof(arr)/sizeof(arr[0]);
    int n = 300; // value to make change from
    printf(" %d ", count(arr, m, n));
    return 0;
}
The program runs fine. It displays the number of all possible combinations, but I need it to be more advanced. The way I need it to work is to display the result in the following fashion:
1 cent: n number of possible combinations.
2 cents:
5 cents:
and so on...
How can I modify the code to achieve that ?
Greedy Algorithm Approach
Put the denominations in an int array, say int den[] = {1, 2, 5, 10, 20, 50, 100, 200}.
Iterate over this array, and for each iteration do the following (a code sketch follows the worked example below):
- Take the element in the denominations array.
- Divide the change to be allotted by that element.
- If the change is perfectly divisible by the denomination, you are done with the change for that denomination.
- If it is not perfectly divisible, take the remainder and do the same iteration with the other denominations.
- Exit the inner iteration once the value allotted equals the change.
- Do the same for the next denomination available in the denomination array.
Explained with an example:
den = [1, 2, 5, 10, 20, 50, 100, 200]
Change to be allotted: 270; let's take this as x,
and let y be a temporary variable.
Change map z[coin denomination, count of coins]:
int y, z[];
First iteration :
den = 1
x = 270
y = 270/1;
if x is equal to y*den
then z[den, y] // z[1, 270]
Iteration completed
Second Iteration:
den = 2
x = 270
y = 270/2;
if x is equal to y*den
then z[den , y] // [2, 135]
Iteration completed
Let's take an odd number:
x = 217 and den = 20
y = 217/20
now x is not equal to y*den,
then update z[den, y] // [20, 10]
find new x = x - den*y = 17
x = 17, and the next change value identified by greedy would be 10:
den = 10
y = 17/10
now x is not equal to y*den
then update z[den, y] // [10, 1]
find new x = x - den*y = 7
then do the same, and your map would end up with the following entries:
[20, 10]
[10, 1]
[5, 1]
[2, 1]
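Here's a sketch in C++ of the walkthrough above; like the odd-number example, it enters the denomination list at 20, and it reproduces the same map. Note that this computes one set of coins for x, not the combination counts the original question asked for.

#include <cstdio>
#include <utility>
#include <vector>

int main() {
    const int den[] = {20, 10, 5, 2, 1};        // largest first, as in the example
    int x = 217;                                // change to be allotted
    std::vector<std::pair<int, int>> z;         // (denomination, count of coins)
    for (int d : den) {
        int y = x / d;                          // how many coins of this denomination fit
        if (y > 0) {
            z.push_back({d, y});
            x -= d * y;                         // new x = x - den*y
        }
        if (x == 0) break;                      // change fully allotted
    }
    for (const auto& p : z)
        printf("[%d, %d]\n", p.first, p.second); // [20, 10] [10, 1] [5, 1] [2, 1]
}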

codility MaxDistanceMonotonic, what's wrong with my solution

Question:
A non-empty zero-indexed array A consisting of N integers is given.
A monotonic pair is a pair of integers (P, Q), such that 0 ≤ P ≤ Q < N and A[P] ≤ A[Q].
The goal is to find the monotonic pair whose indices are the furthest apart. More precisely, we should maximize the value Q − P. It is sufficient to find only the distance.
For example, consider array A such that:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
There are eleven monotonic pairs: (0,0), (0, 2), (1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (3, 3), (3, 4), (4, 4), (5, 5). The biggest distance is 3, in the pair (1, 4).
Write a function:
int solution(vector<int> &A);
that, given a non-empty zero-indexed array A of N integers, returns the biggest distance within any of the monotonic pairs.
For example, given:
A[0] = 5
A[1] = 3
A[2] = 6
A[3] = 3
A[4] = 4
A[5] = 2
the function should return 3, as explained above.
Assume that:
N is an integer within the range [1..300,000];
each element of array A is an integer within the range [−1,000,000,000..1,000,000,000].
Complexity:
expected worst-case time complexity is O(N);
expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).
Elements of input arrays can be modified.
Here is my solution to MaxDistanceMonotonic:
int solution(vector<int> &A) {
    long int result;
    long int max = A.size() - 1;
    long int min = 0;
    while (A.at(max) < A.at(min)) {
        max--;
        min++;
    }
    result = max - min;
    while (max < (long int)A.size()) {
        while (min >= 0) {
            if (A.at(max) >= A.at(min) && max - min > result) {
                result = max - min;
            }
            min--;
        }
        max++;
    }
    return result;
}
And my result is like this; what's wrong with my answer for the last test?
If you have:
index:   0   1   2   3   4   5
value:  31   2  10  11  12  30
Your algorithm outputs 3, but the correct answer is 4 = 5 - 1.
This happens because your min goes to -1 on the first full run of the inner while loop, so the pair (1, 5) never gets a chance to be checked: max starts out at 4 when entering the nested whiles.
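For concreteness, here is a minimal repro of that case, with your solution() pasted in unchanged:

#include <iostream>
#include <vector>
using namespace std;

int solution(vector<int> &A) {
    long int result;
    long int max = A.size() - 1;
    long int min = 0;
    while (A.at(max) < A.at(min)) { max--; min++; }
    result = max - min;
    while (max < (long int)A.size()) {
        while (min >= 0) {
            if (A.at(max) >= A.at(min) && max - min > result)
                result = max - min;
            min--;   // min ends at -1 and is never reset
        }
        max++;
    }
    return result;
}

int main() {
    vector<int> A = {31, 2, 10, 11, 12, 30};
    cout << solution(A) << endl;  // prints 3, but the furthest pair is (1, 5)
}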
Note that the problem description expects O(n) extra storage, while you use O(1). I don't think it's possible to solve the problem with O(1) extra storage and O(n) time.
I suggest you rethink your approach. If you give up, there is an official solution here.