Find minimum sum of non-neighbouring K entries inside an array - C++

Given an integer array A of size N, find the minimum sum of K non-neighboring entries (the chosen entries can't be adjacent to one another; for example, if K were 2, you couldn't add A[2] and A[3] and call it the minimum sum, even if it were, because those entries are adjacent/neighboring). Examples:
A[] = {355, 46, 203, 140, 28}, k = 2, result would be 74 (46 + 28)
A[] = {9, 4, 0, 9, 14, 7, 1}, k = 3, result would be 10 (9 + 0 + 1)
The problem is somewhat similar to House Robber on LeetCode, except that instead of finding the maximum sum of non-adjacent entries, we are tasked with finding the minimum sum under the constraint of exactly K entries.
From my perspective, this is clearly a dynamic programming problem, so I tried to break the problem down recursively and implemented something like this:
#include <vector>
#include <iostream>
#include <algorithm>
using namespace std;

int minimal_k(vector<int>& nums, int i, int k)
{
    if (i == 0) return nums[0];
    if (i < 0 || !k) return 0;
    return min(minimal_k(nums, i - 2, k - 1) + nums[i], minimal_k(nums, i - 1, k));
}

int main()
{
    // example above
    vector<int> nums{9, 4, 0, 9, 14, 7, 1};
    cout << minimal_k(nums, nums.size() - 1, 3);
    // output is 4, wrong answer
}
This was my attempt at a solution; I have played around with it a lot, but with no luck. What would be a correct solution to this problem?

This line:
if (i < 0 || !k) return 0;
If k is 0, you should return 0. But if i < 0, or if the effective length of the array is less than k, you need to return a VERY LARGE value so that the summed result goes higher than any valid solution.
In my solution, I have the recursion return INT_MAX as a long long when recursing into an invalid subset or when k exceeds the remaining length.
And as with any of these dynamic programming and recursion problems, a cache of results so that you don't repeat the same recursive search will help out a bunch. This will speed things up by several orders of magnitude for very large input sets.
Here's my solution.
#include <iostream>
#include <vector>
#include <unordered_map>
#include <algorithm>
#include <climits>
using namespace std;

// The "cache" is a map from offset to another map
// that tracks k to a final result.
typedef unordered_map<size_t, unordered_map<size_t, long long>> CACHE_MAP;

bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result);
void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result);

long long minimal_k_impl(const vector<int>& nums, size_t offset, size_t k, CACHE_MAP& cache)
{
    long long result = INT_MAX;
    size_t len = nums.size();
    if (k == 0)
    {
        return 0;
    }
    if (offset >= len)
    {
        return INT_MAX; // exceeded array boundary, return INT_MAX
    }
    size_t effective_length = len - offset;
    // If we have more k than remaining elements, return INT_MAX to indicate
    // that this recursion is invalid.
    // You might be able to reduce this to checking (effective_length/2+1 < k).
    if ((effective_length < k) || ((effective_length == k) && (k != 1)))
    {
        return INT_MAX;
    }
    if (get_cache_result(cache, offset, k, result))
    {
        return result;
    }
    long long sum1 = nums[offset] + minimal_k_impl(nums, offset + 2, k - 1, cache);
    long long sum2 = minimal_k_impl(nums, offset + 1, k, cache);
    result = std::min(sum1, sum2);
    insert_into_cache(cache, offset, k, result);
    return result;
}

long long minimal_k(const vector<int>& nums, size_t k)
{
    CACHE_MAP cache;
    return minimal_k_impl(nums, 0, k, cache);
}

bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result)
{
    // Effectively this code does this:
    //   result = cache[offset][k]
    bool ret = false;
    auto itor1 = cache.find(offset);
    if (itor1 != cache.end())
    {
        auto& inner_map = itor1->second;
        auto itor2 = inner_map.find(k);
        if (itor2 != inner_map.end())
        {
            ret = true;
            result = itor2->second;
        }
    }
    return ret;
}

void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result)
{
    cache[offset][k] = result;
}

int main()
{
    vector<int> nums1{ 355, 46, 203, 140, 28 };
    vector<int> nums2{ 9, 4, 0, 9, 14, 7, 1 };
    vector<int> nums3{ 8, 6, 7, 5, 3, 0, 9, 5, 5, 5, 1, 2, 9, -10 };

    long long result = minimal_k(nums1, 2);
    std::cout << result << std::endl;

    result = minimal_k(nums2, 3);
    std::cout << result << std::endl;

    result = minimal_k(nums3, 3);
    std::cout << result << std::endl;
    return 0;
}

At its core this is a sorting-related problem. To find the sum of the minimum k non-adjacent elements, we first bring the minimum-value elements next to each other by sorting. Let's see this sorting approach.
Given input array = [9, 4, 0, 9, 14, 7, 1] and k = 3
Create another array that contains the elements of the input array paired with their indexes, as shown below:
[9, 0], [4, 1], [0, 2], [9, 3], [14, 4], [7, 5], [1, 6]
Then sort this array.
The motive behind this element-and-index array is that, after sorting, the index information of each element is not lost.
One more array is required to keep a record of used indexes, so the initial view of the information after sorting is as shown below.
Element and Index array
..............................
| 0 | 1 | 4 | 7 | 9 | 9 | 14 |
..............................
2 6 1 5 3 0 4 <-- Index
Used index record array
..............................
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6 <-- Index
In the used index record array, 0 (false) means the element at this index is not yet included in the minimum sum.
The front element of the sorted array is the minimum-value element; we include it in the minimum sum and update the used index record array to indicate that this element is used, as shown below.
The front element is 0 at index 2, so we set 1 (true) at index 2 of the used index record array, as shown below:
min sum = 0
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6
Iterate to the next element in the sorted array; as you can see above, it is 1 and has index 6. To include 1 in the minimum sum we have to check whether the left or right adjacent element of 1 is already used. Since 1 has index 6, it is the last element of the input array, which means we only have to check whether index 5 is already used, and that can be done by looking at the used index record array. As shown above, usedIndexRecord[5] = 0, so 1 can be considered for the minimum sum. After using 1, the state is updated to the following:
min sum = 0 + 1
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Then iterate to the next element, which is 4 at index 1, but this cannot be considered because the element at index 2 is already used. The same happens with elements 7 and 9 (the 9 at index 3), because these are at indexes 5 and 3 respectively and are adjacent to used elements.
Finally we iterate to 9 at index 0, and by looking at the used index record array, usedIndexRecord[1] = 0, which is why this 9 can be included in the minimum sum, and the final state reached is the following:
min sum = 0 + 1 + 9
Used index record array
..............................
| 1 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Finally, minimum sum = 10.
One worst-case scenario is when the input array is already sorted; then at least 2*k - 1 elements have to be iterated to find the minimum sum of k non-adjacent elements, as shown below.
input array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and k = 4; then the elements 1, 3, 5 and 7 (at indexes 0, 2, 4 and 6) are considered for the minimum sum:
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Note: You have to include all input validation; for example, one validation is that if you want to find the minimum sum of k non-adjacent elements, then the input should have at least 2*k - 1 elements. I am not including these validations because I am aware of all the input constraints of the problem.
#include <iostream>
#include <vector>
#include <algorithm>

using std::cout;

long minSumOfNonAdjacentKEntries(std::size_t k, const std::vector<int>& arr)
{
    if (arr.size() < 2)
    {
        return 0;
    }
    std::vector<std::pair<int, std::size_t>> numIndexArr;
    numIndexArr.reserve(arr.size());
    for (std::size_t i = 0, arrSize = arr.size(); i < arrSize; ++i)
    {
        numIndexArr.emplace_back(arr[i], i);
    }
    std::sort(numIndexArr.begin(), numIndexArr.end(),
              [](const std::pair<int, std::size_t>& a,
                 const std::pair<int, std::size_t>& b) { return a.first < b.first; });

    long minSum = numIndexArr.front().first;
    std::size_t elementCount = 1;
    std::size_t lastIndex = arr.size() - 1;
    std::vector<bool> usedIndexRecord(arr.size(), false);
    usedIndexRecord[numIndexArr.front().second] = true;

    for (std::vector<std::pair<int, std::size_t>>::const_iterator it = numIndexArr.cbegin() + 1,
         endIt = numIndexArr.cend(); elementCount < k && endIt != it; ++it)
    {
        bool leftAdjacentElementUsed = (0 == it->second) ? false : usedIndexRecord[it->second - 1];
        bool rightAdjacentElementUsed = (lastIndex == it->second) ? false : usedIndexRecord[it->second + 1];
        if (!leftAdjacentElementUsed && !rightAdjacentElementUsed)
        {
            minSum += it->first;
            ++elementCount;
            usedIndexRecord[it->second] = true;
        }
    }
    return minSum;
}

int main()
{
    cout << "k = 2, [355, 46, 203, 140, 28], min sum = " << minSumOfNonAdjacentKEntries(2, {355, 46, 203, 140, 28})
         << '\n';
    cout << "k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = " << minSumOfNonAdjacentKEntries(3, {9, 4, 0, 9, 14, 7, 1})
         << '\n';
}
Output:
k = 2, [355, 46, 203, 140, 28], min sum = 74
k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = 10


Finding the number of sum combinations between two arrays that satisfy a condition

The problem:
I have two arrays, A of length v and M of length w. Given two numbers p and q, I want to find how many combinations of the sum of one element from each array satisfy the following condition:
p <= A[i] + M[j] <= q
An example:
Let:
A = [9, 14, 5, 8, 12, 2, 16],
v = 7,
M = [6, 2, 9, 3, 10],
w = 5,
p = 21,
q = 24
The answer will be 5, because of the following combinations:
14 + 9 = 23
14 + 10 = 24
12 + 9 = 21
12 + 10 = 22
16 + 6 = 22
What I have tried:
The following is an implementation of the problem in C++:
int K = 0; // K is the answer
for (int i = 0; i < v; i++) {
    for (int j = 0; j < w; j++) {
        if (A[i] + M[j] >= p && A[i] + M[j] <= q) {
            ++K;
        }
    }
}
As we can see, the above code uses a loop inside a loop, making the time complexity of the program O(v×w), which is pretty slow for large arrays.
The question
Is there a faster way to solve this problem?
Problem Summary: Given two arrays A and B with sizes v and w respectively, find the number of possible pairings of an element from A and an element from B such that the two elements have a sum that is >= p and <= q.
The simple, brute force algorithm is essentially what you have currently. The brute force algorithm would simply involve testing all possible pairs, which, as you said, would have a time complexity of O(v*w) because there are v ways to choose the first element and w ways to choose the second element when testing all the pairs.
As #thestruggler pointed out in their comment, sorting and binary search could be applied to create a significantly more efficient algorithm.
Let's say we sort B in ascending order. For the test case you provide, we would then have:
A = [9, 14, 5, 8, 12, 2, 16]
B = [2, 3, 6, 9, 10]
p = 21 and q = 24
Now, notice that for every element in A, we can calculate the range of elements in B that, when added to the element, would give a sum between p and q. We can actually find this range in O(logW) time by using what is called binary search. Specifically, if we were looking to pair the first number in A (9) with numbers in B, we would binary search for the index of the first element that is >= 12 (since 21 - 9 = 12) and then binary search for the index of the last element that is <= 15 (since 24 - 9 = 15). The number of elements in B that would work in a pairing with the element from A is then just 1 plus the difference between the two indexes.
Overall, this algorithm would have a complexity of O(WlogW + VlogW) (or O(VlogV + WlogV); if you want to go above and beyond your program could decide to sort the larger array to save time on testing). This is because sorting an array with N elements takes O(NlogN) time, and because each binary search over a sorted array with N elements takes O(logN).
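Here is a minimal sketch of that sort-plus-binary-search idea (the function name and variable names are mine, not from the question): it sorts B once and, for each element a of A, counts the elements of B falling in [p - a, q - a] with std::lower_bound and std::upper_bound.

#include <algorithm>
#include <iostream>
#include <vector>

long long countPairsInRange(const std::vector<int>& A, std::vector<int> B, int p, int q)
{
    std::sort(B.begin(), B.end());                 // O(W log W)
    long long count = 0;
    for (int a : A)                                // O(V log W) in total
    {
        // Elements of B that give a sum in [p, q] lie in [p - a, q - a].
        auto lo = std::lower_bound(B.begin(), B.end(), p - a);
        auto hi = std::upper_bound(B.begin(), B.end(), q - a);
        count += hi - lo;
    }
    return count;
}

int main()
{
    std::vector<int> A{9, 14, 5, 8, 12, 2, 16};
    std::vector<int> B{6, 2, 9, 3, 10};
    std::cout << countPairsInRange(A, B, 21, 24) << '\n';   // expected: 5
}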
This can also be solved in the following way.
First, sort both arrays:
[9, 14, 5, 8, 12, 2, 16] => [2, 5, 8, 9, 12, 14, 16]
[6, 2, 9, 3, 10] => [2, 3, 6, 9, 10]
Now iterate over all elements of the smaller array and do the following.
[2, 3, 6, 9, 10],
The current element is 2; subtract it from p and call the result num:
num = p - 2 = 21 - 2 = 19
Then all numbers in the other array that are greater than or equal to 19 will make a sum of at least 21 with 2. But no element in the other array is greater than or equal to 19, which means adding 2 to any element of the other array cannot reach p.
The next element is 3 and it also cannot fulfill the requirement; the same goes for the other small elements, so let's move directly to element 9 for the explanation:
[2, 3, 6, 9, 10]
num = p - 9 = 21 - 9 = 12, and by taking the lower bound of 12 we get all the numbers whose sum with 9 is greater than or equal to p (21), namely 12, 14 and 16 below:
[2, 5, 8, 9, 12, 14, 16],
The sum of each of these numbers with 9 is greater than or equal to p. Now it is time to find how many of them produce a sum that is also less than or equal to q; to do that we do the following:
num = q - 9 = 24 - 9 = 15, and finding the upper bound of 15 gives all the numbers whose sum with 9 is less than or equal to q, namely 12 and 14 below:
[2, 5, 8, 9, 12, 14, 16],
This way you can find all combinations having a sum with p <= sum <= q.
#include <iostream>
#include <vector>
#include <algorithm>

std::size_t combinationCount(int p, int q, std::vector<int> arr1, std::vector<int> arr2)
{
    std::sort(arr1.begin(), arr1.end());
    std::sort(arr2.begin(), arr2.end());

    std::vector<int>::const_iterator it1 = arr1.cbegin();
    std::vector<int>::const_iterator endIt1 = arr1.cend();
    std::vector<int>::const_iterator it2 = arr2.cbegin();
    std::vector<int>::const_iterator endIt2 = arr2.cend();
    if (arr2.size() < arr1.size())
    {
        std::swap(it1, it2);
        std::swap(endIt1, endIt2);
    }
    std::size_t count = 0;
    for (; endIt1 != it1; ++it1)
    {
        int num = p - *it1;
        std::vector<int>::const_iterator lowBoundOfPIt = std::lower_bound(it2, endIt2, num);
        if (endIt2 != lowBoundOfPIt)
        {
            num = q - *it1;
            std::vector<int>::const_iterator upBoundOfQIt = std::upper_bound(it2, endIt2, num);
            count += (upBoundOfQIt - lowBoundOfPIt);
        }
    }
    return count;
}

int main()
{
    std::cout << "count = " << combinationCount(21, 24, {9, 14, 5, 8, 12, 2, 16}, {6, 2, 9, 3, 10}) << '\n';
}
Output : 5

Hand executing a C++ vector

I'm new to programming and C++. In my course I need to hand-execute a program and show how the elements change and which ones. I'm a bit stuck on this, but I think I'm on the right track. Any assistance would be really appreciated.
void data(vector<double> &data, int idx, double value)
{
    data.push_back(value);
    if (idx >= data.size() - 1) return;
    if (idx < 0) idx = 0;
    for (int i = data.size() - 1; i > idx; i--)
    {
        data[i] = data[i - 1];
        data[i - 1] = value;
    }
}
The data set I'm using is:
[4, -6, 0, 8, -7]
idx: 2
value: -7
So the -7 value is what gets push_back'd onto the end of the vector.
I think I've figured out some of it: data.size() - 1 means the last element in the array, and if idx is greater than or equal to that last index we return? The for loop seems to iterate backwards to me.
If your problem is to figure out the purpose of this algorithm, read this answer.
Let's first take your example:
std::vector<double> a{ 4, -6, 0, 8, -7 };
data(a, 2, -7);
The result is: 4, -6, -7, 0, 8, -7
It should be clear that data(vec, idx, val) inserts val into vec so that it becomes the idx-th element, and vec increases its size by 1.
If idx is out of range, it is adjusted to 0 (if < 0) or to vec.size() (if >= vec.size()).
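In other words, for any idx the call behaves like a clamped insert. Here is a minimal equivalent using the standard library, as I read the function (my own sketch, not part of the exercise; std::clamp needs C++17):

#include <algorithm>
#include <vector>

// Clamp idx into [0, vec.size()] and insert value there; this reproduces
// what the hand-executed function ends up doing.
void data_equivalent(std::vector<double>& vec, int idx, double value)
{
    idx = std::clamp(idx, 0, static_cast<int>(vec.size()));
    vec.insert(vec.begin() + idx, value);
}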
Edit:
Visualization:
Initially:
4, -6, 0, 8, -7, -7
First iteration I = data.size() - 1 = 5:
4, -6, 0, 8, -7, -7 (data[5] = data[4])
4, -6, 0, 8, -7, -7 (data[4] = value)
(Note: here -7 = -7 so nothing changes)
Second iteration I = 4:
4, -6, 0, 8, 8, -7 (data[4] = data[3])
4, -6, 0, -7, 8, -7 (data[3] = value)
Third iteration I = 3:
4, -6, 0, 0, 8, -7 (data[3] = data[2])
4, -6, -7, 0, 8, -7 (data[2] = value)
Now I = 2, over.
if (idx >= data.size() - 1) return;
Here you check that the index isn't outside of the array: data.size() - 1 is the last element, so idx can be the second-to-last element at most. We will see why below.
if (idx < 0) idx = 0;
If the index is lower than 0, just set it to 0 to access the first element
for(int i = data.size() - 1; i > idx; i--)
You start with the index of the last element, and as long as it is greater than idx you do another iteration (and decrement it). So in your example you would have three iterations, with i = 5, i = 4 and i = 3; idx is like an exclusive lower bound.
data[i] = data[i -1];
data[i - 1] = value;
You first copy the previous element to the current one, and then the value (-7 in your case) to the previous element. So in the last iteration i - 1 will be the same as idx, and that is why idx cannot be the last element: the loop would not be entered.
So what this actually does is insert value step by step from the end of the vector to position idx. The copy of value that was pushed to the back is overwritten in the first iteration, and the elements from idx onward slide one position up. In every iteration, value moves one position further to the left, and what was there before steps up.
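If you want to check the hand execution by actually running it, here is a small self-contained harness around the function from the question (the main function and the printing are mine):

#include <iostream>
#include <vector>
using namespace std;

void data(vector<double> &data, int idx, double value)
{
    data.push_back(value);
    if (idx >= data.size() - 1) return;
    if (idx < 0) idx = 0;
    for (int i = data.size() - 1; i > idx; i--)
    {
        data[i] = data[i - 1];
        data[i - 1] = value;
    }
}

int main()
{
    vector<double> a{4, -6, 0, 8, -7};
    data(a, 2, -7);
    for (double x : a) cout << x << ' ';   // prints: 4 -6 -7 0 8 -7
    cout << '\n';
}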

Time complexity of an iterative algorithm

I am trying to find the Time Complexity of this algorithm.
The iterative algorithm produces all the bit-strings within a given Hamming distance from the input bit-string. It generates all increasing sequences 0 <= a[0] < ... < a[dist-1] < strlen(num), and inverts the bits at the corresponding indices.
The vector a is supposed to keep indices for which bits have to be inverted. So if a contains the current index i, we print 1 instead of 0 and vice versa. Otherwise we print the bit as is (see else-part), as shown below:
// e.g. hamming("0000", 2);
void hamming(const char* num, size_t dist) {
    assert(dist > 0);
    vector<int> a(dist);
    size_t k = 0, n = strlen(num);
    a[k] = -1;
    while (true)
        if (++a[k] >= n)
            if (k == 0)
                return;
            else {
                --k;
                continue;
            }
        else
            if (k == dist - 1) {
                // this is an O(n) operation and will be called
                // (n choose dist) times, in total.
                print(num, a);
            }
            else {
                a[k + 1] = a[k];
                ++k;
            }
}
What is the Time Complexity of this algorithm?
My attempt says:
dist * n + (n choose t) * n + 2
but this does not seem to be true; consider the following examples, all with dist = 2:
len = 3, (3 choose 2) = 3 * O(n), 10 while iterations
len = 4, (4 choose 2) = 6 * O(n), 15 while iterations
len = 5, (5 choose 2) = 10 * O(n), 21 while iterations
len = 6, (6 choose 2) = 15 * O(n), 28 while iterations
Here are two representative runs (with the print happening at the start of the loop):
000, len = 3
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
k = 0, total_iter = 5
vector a = 0 3
k = 1, total_iter = 6
vector a = 1 1
Paid O(n)
k = 1, total_iter = 7
vector a = 1 2
k = 0, total_iter = 8
vector a = 1 3
k = 1, total_iter = 9
vector a = 2 2
k = 0, total_iter = 10
vector a = 2 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gsamaras#pythagoras:~/Desktop/generate_bitStrings_HammDistanceT$ ./iter
0000, len = 4
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
Paid O(n)
k = 1, total_iter = 5
vector a = 0 3
k = 0, total_iter = 6
vector a = 0 4
k = 1, total_iter = 7
vector a = 1 1
Paid O(n)
k = 1, total_iter = 8
vector a = 1 2
Paid O(n)
k = 1, total_iter = 9
vector a = 1 3
k = 0, total_iter = 10
vector a = 1 4
k = 1, total_iter = 11
vector a = 2 2
Paid O(n)
k = 1, total_iter = 12
vector a = 2 3
k = 0, total_iter = 13
vector a = 2 4
k = 1, total_iter = 14
vector a = 3 3
k = 0, total_iter = 15
vector a = 3 4
The while loop is somewhat clever and subtle, and it's arguable that it's doing two different things (or even three if you count the initialisation of a). That's what's making your complexity calculations challenging, and it's also less efficient than it could be.
In the abstract, to incrementally compute the next set of indices from the current one, the idea is to find the last position i whose value a[i] is less than n-dist+i, increment that value, and set the following indexes to a[i]+1, a[i]+2, and so on.
For example, if dist=5, n=11 and your indexes are:
0, 3, 5, 9, 10
Then 5 is the last value less than n-dist+i (because n-dist is 6, and 10=6+4, 9=6+3, but 5<6+2).
So we increment 5, and set the subsequent integers to get the set of indexes:
0, 3, 6, 7, 8
Now consider how your code runs, assuming k=4
0, 3, 5, 9, 10
a[k] + 1 is 11, so k becomes 3.
++a[k] is 10, so a[k+1] becomes 10, and k becomes 4.
++a[k] is 11, so k becomes 3.
++a[k] is 11, so k becomes 2.
++a[k] is 6, so a[k+1] becomes 6, and k becomes 3.
++a[k] is 7, so a[k+1] becomes 7, and k becomes 4.
++a[k] is 8, and we continue to call the print function.
This code is correct, but it's not efficient, because k scuttles backwards and forwards as it searches for the highest index that can be incremented without causing an overflow in the higher indices. In fact, if the highest such index is j from the end, the code uses a non-linear number of iterations of the while loop. You can easily demonstrate this yourself if you trace how many iterations of the while loop occur when n==dist for different values of n. There is exactly one line of output, but you'll see an O(2^n) growth in the number of iterations (in fact, you'll see 2^(n+1)-2 iterations).
This scuttling makes your code needlessly inefficient, and also hard to analyse.
Instead, you can write the code in a more direct way:
void hamming2(const char* num, size_t dist) {
    int a[dist];
    for (int i = 0; i < dist; i++) {
        a[i] = i;
    }
    size_t n = strlen(num);
    while (true) {
        print(num, a);
        int i;
        for (i = dist - 1; i >= 0; i--) {
            if (a[i] < n - dist + i) break;
        }
        if (i < 0) return;
        a[i]++;
        for (int j = i + 1; j < dist; j++) a[j] = a[i] + j - i;
    }
}
Now, each time through the while loop produces a new set of indexes. The exact cost per iteration is not straightforward, but since print is O(n), and the remaining code in the while loop is at worst O(dist), the overall cost is O(N_INCR_SEQ(n, dist) * n), where N_INCR_SEQ(n, dist) is the number of increasing sequences of natural numbers < n of length dist. Someone in the comments provides a link that gives a formula for this.
Notice that given n, which represents the length, and t, which represents the required distance, the number of increasing series of t integers between 1 and n (or, in index form, between 0 and n-1) is indeed n choose t, since we pick t distinct indices.
The problem occurs with your generation of those series:
- First, notice that, for example, in the case of length 4 you actually go over 5 different indices, 0 to 4.
- Secondly, notice that you are taking into account series with identical indices (in the case of t=2, that's 0 0, 1 1, 2 2 and so on); in general, you go through every non-decreasing series instead of every strictly increasing series.
So for calculating the TC of your program, make sure you take that into account.
Hint: try to make a one-to-one correspondence from the universe of those series to the universe of integer solutions of some equation.
If you need the direct solution, take a look here :
https://math.stackexchange.com/questions/432496/number-of-non-decreasing-sequences-of-length-m
The final solution is (n+t-1) choose t, but taking the first bullet into account, in your program it's actually ((n+1)+t-1) choose t, since you loop over one extra index.
Denote
((n+1)+t-1) choose t =: A, n choose t =: B
Overall we get O(1) + B*O(n) + (A-B)*O(1).
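As a quick numeric check of that count (the binom helper below is mine, for checking only): with t = 2, ((n+1)+t-1) choose t equals (n+2) choose 2, which reproduces the while-iteration counts 10, 15, 21, 28 reported in the question for n = 3, 4, 5, 6.

#include <cstdint>
#include <iostream>

// Small exact binomial coefficient helper (illustrative only).
std::uint64_t binom(std::uint64_t n, std::uint64_t r)
{
    std::uint64_t result = 1;
    for (std::uint64_t i = 1; i <= r; ++i)
        result = result * (n - r + i) / i;
    return result;
}

int main()
{
    // For t = 2, print ((n+1)+t-1) choose t = (n+2) choose 2 for n = 3..6;
    // expected: 10, 15, 21, 28, matching the while-iteration counts above.
    for (std::uint64_t n = 3; n <= 6; ++n)
        std::cout << "n = " << n << ": " << binom(n + 2, 2) << '\n';
}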

All possible combinations of coins

I need to write a program which displays all possible change combinations given an array of denominations [1 , 2, 5, 10, 20, 50, 100, 200] // 1 = 1 cent
Value to make the change from = 300
I'm basing my code on the solution from this site http://www.geeksforgeeks.org/dynamic-programming-set-7-coin-change/
#include <stdio.h>

int count(int S[], int m, int n)
{
    int i, j, x, y;
    // We need n+1 rows as the table is constructed in a bottom-up manner using
    // the base case of the 0 value (n = 0)
    int table[n+1][m];
    // Fill the entries for the 0 value case (n = 0)
    for (i = 0; i < m; i++)
        table[0][i] = 1;
    // Fill the rest of the table entries in a bottom-up manner
    for (i = 1; i < n+1; i++)
    {
        for (j = 0; j < m; j++)
        {
            // Count of solutions including S[j]
            x = (i-S[j] >= 0) ? table[i - S[j]][j] : 0;
            // Count of solutions excluding S[j]
            y = (j >= 1) ? table[i][j-1] : 0;
            // Total count
            table[i][j] = x + y;
        }
    }
    return table[n][m-1];
}

// Driver program to test above function
int main()
{
    int arr[] = {1, 2, 5, 10, 20, 50, 100, 200}; // coins array
    int m = sizeof(arr)/sizeof(arr[0]);
    int n = 300; // value to make change from
    printf(" %d ", count(arr, m, n));
    return 0;
}
The program runs fine. It displays the number of all possible combinations, but I need it to be more advanced. The way I need it to work is to display the result in the following fashion:
1 cent: n number of possible combinations.
2 cents:
5 cents:
and so on...
How can I modify the code to achieve that?
Greedy Algorithm Approach
Have these denominations in an int array, say int den[] = [1, 2, 5, 10, 20, 50, 100, 200]
Iterate over this array
For each iteration do the following:
Take the current element of the denominations array
Divide the amount of change to be allotted by that denomination
If the change amount is perfectly divisible by that denomination, you are done with the change for that starting denomination.
If it is not perfectly divisible, take the remainder and repeat the same step with the next denomination.
Exit the inner iteration once the allotted coins add up to the change amount.
Do the same for the next denomination available in our denomination array.
Explained with an example (a minimal code sketch follows the example):
den = [1 , 2, 5, 10, 20, 50, 100, 200]
Change to be allotted: 270; let's take this as x
and y be the temporary variable
Change map z[coin denomination, count of coins]
int y, z[];
First iteration :
den = 1
x = 270
y = 270/1;
if x is equal to y*den
then z[den, y] // z[1, 270]
Iteration completed
Second Iteration:
den = 2
x = 270
y = 270/2;
if x is equal to y*den
then z[den , y] // [2, 135]
Iteration completed
Let's take an odd number
x = 217 and den = 20
y = 217/20;
now x is not equal to y*den
then update z[den, y] // [20, 10]
find new x = x - den*y = 17
x = 17; greedily identify the next change value, which would be 10
den = 10
y = 17/10
now x is not equal to y*den
then update z[den, y] // [10, 1]
find new x = x - den*y = 7
Then do the same, and your map would have the following entries:
[20, 10]
[10, 1]
[5, 1]
[2, 1]
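Here is a minimal sketch of that greedy walk (the function name and the choice of a vector of pairs for z are mine, not from the question): starting from a chosen denomination, take as many coins of it as fit, then move on to smaller denominations with the remainder until nothing is left.

#include <iostream>
#include <utility>
#include <vector>

// Greedy change walk, as in the worked example above.
std::vector<std::pair<int, int>> greedyChange(int x, std::size_t start)
{
    const std::vector<int> den{200, 100, 50, 20, 10, 5, 2, 1}; // largest to smallest
    std::vector<std::pair<int, int>> z;                        // [denomination, coin count]
    for (std::size_t i = start; i < den.size() && x > 0; ++i)
    {
        int y = x / den[i];            // how many coins of this denomination fit
        if (y > 0)
        {
            z.emplace_back(den[i], y);
            x -= den[i] * y;           // remainder still to be changed
        }
    }
    return z;
}

int main()
{
    // Reproduces the x = 217 example above, starting at denomination 20 (index 3);
    // prints [20, 10] [10, 1] [5, 1] [2, 1]
    for (const auto& coin : greedyChange(217, 3))
        std::cout << '[' << coin.first << ", " << coin.second << "] ";
    std::cout << '\n';
}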

C++ negative array indices

I want to loop over an array, and during each iteration I want to loop backwards over the previous 5 elements.
So given this array
int arr[] = {3, 1, 4, 1, 7, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4};
and this nested loop
for (int i = 0; i < arr.size; i++)
{
    for (int h = i - 5; h < i; h++)
    {
        //things happen
    }
}
So, if i=0, the second loop would loop over the last few elements: 4, 6, 2, 6, 5.
How could you handle this?
I'm assuming that:
- You only want to go over previous values (i.e. no wrap-around)
- You don't actually want arr to be a multi-dimensional array, as suggested by your choice of tags
- You want to include the current i in your five values
This is just a small modification to your code that will do (what I think) you are asking:
#include <algorithm>

int main()
{
    int arr[] = {3, 1, 4, 1, 7, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4, 6, 2, 6, 4};
    const int size = sizeof(arr) / sizeof(arr[0]);
    for (int i = 0; i < size; i++)
    {
        for (int h = std::max(i - 4, 0); h < i + 1; h++)
        {
            //things happen
        }
    }
}
Note the h = max(i-4, 0) and the h < i+1. This adjusts the inner loop so that it starts no lower than index 0 and loops up through the five values up to and including i (four previous values and i itself). h will always be within bounds.
The case where i==arr.size won't be a problem in the inner loop as the outer loop will terminate before that happens (i is always within bounds).
Edit: I saw this comment:
I want the first element to consider the last final 5 elements of the array though.
in which case, your loops should look like:
for (int i = 0; i < size; i++)
{
    for (int h = 0; h < 5; h++)
    {
        int index = (i + size - h) % size;
        //things happen
        //access array with arr[index];
    }
}
This should do what you want:
When i=0 and h=0, index = (0+24-0)%24, which is 0. For h=1 we go one less: index = (0+24-1)%24 = 23, and so on for the next values of h.
The code gets the last 5 values, wrapping around, inclusive of the current value (so it will visit indexes 20, 21, 22, 23, 0 when i=0, and 21, 22, 23, 0, 1 when i=1).
If you want the five before, non-inclusive, then the inner loop should be:
for (int h = 1; h <= 5; h++)
Here is the current output of the loop as it stands:
i 0 0 0 0 0 1 1 1 1 1 2 2 2 2 2 3 3 3 3 3 ... 22 22 22 22 22 23 23 23 23 23
h 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 0 1 2 3 4 ... 0 1 2 3 4 0 1 2 3 4
index 0 23 22 21 20 1 0 23 22 21 2 1 0 23 22 3 2 1 0 23 ... 22 21 20 19 18 23 22 21 20 19
I assume you want it to loop around (I don't know why). If so, use modulo:
int index = (h + arr.size) % arr.size;
Using the modulo operator.
const int array_length = sizeof(arr) / sizeof(arr[0]); // use sizeof to get the size of the C array
for (int i = 0; i < array_length; i++)
{
    for (int h = 5; h > 0; h--)
    {
        int index = (i - h + array_length) % array_length;
        //things happen
    }
}
Is using an if statement not an option?
const int array_size = 24;
int arr[array_size] = { 1, 3, 4, 5, ..., 2 };
for (int i = 0; i < array_size; i++)
{
    for (int h = i - 5; h < i; h++)
    {
        int arr_index = (h >= 0) ? h : (array_size + h);
        //do your things with arr[arr_index]
    }
}
You may also start the nested loop with something like:
for (int h = i - min(i, 5); h < i; ++h)
{
}
which lets you process the first 5 cells as well. Also, if you are dealing with some kind of signal or image processing, consider extending arr to 29 elements with 5 preceding zeros (or whatever value would be suitable), and start the first for loop at the 5th element.
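A small sketch of that padding idea (array names and the copy step are mine; the real input values would go where the comment is):

#include <algorithm>

int main()
{
    const int N = 24;
    int arr[N] = { /* the original values */ };
    int padded[N + 5] = {0};              // 5 leading zeros, then the data
    std::copy(arr, arr + N, padded + 5);
    for (int i = 5; i < N + 5; i++)       // i now walks over the real data
    {
        for (int h = i - 5; h < i; h++)   // h never drops below 0
        {
            // things happen with padded[h]
        }
    }
}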
Just make an if statement in the nested loop. Something like this:
for (int h = i - 5; h < i; h++)
{
    // do stuff
    if (i == 0)
        break;
}