Given an integer array A of size N, find the minimum sum of K non-neighboring entries (no two chosen entries may be adjacent; for example, if K were 2, you couldn't pick A[2] and A[3] and call that the minimum sum, even if it were, because those are adjacent/neighboring to one another). Examples:
A[] = {355, 46, 203, 140, 28}, k = 2, result would be 74 (46 + 28)
A[] = {9, 4, 0, 9, 14, 7, 1}, k = 3, result would be 10 (9 + 0 + 1)
The problem is somewhat similar to House Robber on LeetCode, except that instead of finding the maximum sum of non-adjacent entries, we are tasked with finding the minimum sum of exactly K entries.
From my perspective, this is clearly a dynamic programming problem, so I tried to break the problem down recursively and implemented something like this:
#include <vector>
#include <iostream>

using namespace std;

int minimal_k(vector<int>& nums, int i, int k)
{
    if (i == 0) return nums[0];
    if (i < 0 || !k) return 0;
    return min(minimal_k(nums, i - 2, k - 1) + nums[i], minimal_k(nums, i - 1, k));
}

int main()
{
    // example above
    vector<int> nums{9, 4, 0, 9, 14, 7, 1};
    cout << minimal_k(nums, nums.size() - 1, 3);
    // output is 4, wrong answer
}
This was my attempt at a solution; I have played around with it a lot, but no luck. What would be a correct solution to this problem?
This line:
if (i < 0 || !k) return 0;
If k is 0, returning 0 is correct. But if i < 0, or if the effective length of the remaining array is less than k, you need to return a VERY LARGE value, so that the summed result goes higher than any valid solution.
In my solution, I have the recursion return INT_MAX as a long long when recursing into an invalid subset or when k exceeds the remaining length.
And as with any of these dynamic programming and recursion problems, a cache of results, so that you don't repeat the same recursive search, will help out a bunch. It can speed things up by several orders of magnitude for very large inputs.
Here's my solution.
#include <iostream>
#include <vector>
#include <unordered_map>
#include <algorithm>
#include <climits> // INT_MAX
using namespace std;
// the "cache" is a map from offset to another map
// that tracks k to a final result.
typedef unordered_map<size_t, unordered_map<size_t, long long>> CACHE_MAP;
bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result);
void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result);
long long minimal_k_impl(const vector<int>& nums, size_t offset, size_t k, CACHE_MAP& cache)
{
    long long result = INT_MAX;
    size_t len = nums.size();

    if (k == 0)
    {
        return 0;
    }
    if (offset >= len)
    {
        return INT_MAX; // exceeded array boundary, return INT_MAX
    }

    size_t effective_length = len - offset;

    // If we have more k than remaining elements, return INT_MAX to indicate
    // that this recursion is invalid
    // you might be able to reduce to checking (effective_length/2+1 < k)
    if ((effective_length < k) || ((effective_length == k) && (k != 1)))
    {
        return INT_MAX;
    }

    if (get_cache_result(cache, offset, k, result))
    {
        return result;
    }

    long long sum1 = nums[offset] + minimal_k_impl(nums, offset + 2, k - 1, cache);
    long long sum2 = minimal_k_impl(nums, offset + 1, k, cache);
    result = std::min(sum1, sum2);
    insert_into_cache(cache, offset, k, result);
    return result;
}
long long minimal_k(const vector<int>& nums, size_t k)
{
    CACHE_MAP cache;
    return minimal_k_impl(nums, 0, k, cache);
}

bool get_cache_result(const CACHE_MAP& cache, size_t offset, size_t k, long long& result)
{
    // effectively this code does this:
    // result = cache[offset][k]
    bool ret = false;
    auto itor1 = cache.find(offset);
    if (itor1 != cache.end())
    {
        auto& inner_map = itor1->second;
        auto itor2 = inner_map.find(k);
        if (itor2 != inner_map.end())
        {
            ret = true;
            result = itor2->second;
        }
    }
    return ret;
}

void insert_into_cache(CACHE_MAP& cache, size_t offset, size_t k, long long result)
{
    cache[offset][k] = result;
}
int main()
{
    vector<int> nums1{ 355, 46, 203, 140, 28 };
    vector<int> nums2{ 9, 4, 0, 9, 14, 7, 1 };
    vector<int> nums3{ 8, 6, 7, 5, 3, 0, 9, 5, 5, 5, 1, 2, 9, -10 };

    long long result = minimal_k(nums1, 2);
    std::cout << result << std::endl;

    result = minimal_k(nums2, 3);
    std::cout << result << std::endl;

    result = minimal_k(nums3, 3);
    std::cout << result << std::endl;

    return 0;
}
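For reference, the same recurrence can also be filled bottom-up, avoiding both the recursion and the map-based cache. Here is a minimal sketch of that variant (my own illustration, not part of the solution above): dp[i][j] is the minimum sum of j non-adjacent entries chosen among the first i elements, with INT_MAX marking infeasible states, the same convention as in the recursive version.

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

// dp[i][j] = minimum sum of j non-adjacent entries among the first i
// elements. INF marks infeasible states; if fewer than 2*k - 1 elements
// are available, the result stays INF, mirroring the recursive version.
long long minimal_k_bottom_up(const std::vector<int>& nums, size_t k)
{
    const long long INF = INT_MAX;
    size_t n = nums.size();
    std::vector<std::vector<long long>> dp(n + 1, std::vector<long long>(k + 1, INF));
    for (size_t i = 0; i <= n; ++i) dp[i][0] = 0; // choosing zero entries costs nothing
    for (size_t i = 1; i <= n; ++i)
    {
        for (size_t j = 1; j <= k; ++j)
        {
            dp[i][j] = dp[i - 1][j]; // skip nums[i-1]
            // take nums[i-1]: the previously taken entry must be at index i-3 or earlier
            long long prev = (i >= 2) ? dp[i - 2][j - 1] : (j == 1 ? 0 : INF);
            if (prev < INF) dp[i][j] = std::min(dp[i][j], prev + nums[i - 1]);
        }
    }
    return dp[n][k];
}

int main()
{
    std::vector<int> nums{ 9, 4, 0, 9, 14, 7, 1 };
    std::cout << minimal_k_bottom_up(nums, 3) << std::endl; // prints 10
}

This runs in O(n*k) time and memory, and sidesteps the hashing overhead of the unordered_map cache.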
This is, at its core, a sorting-related problem. Finding the sum of the minimum k non-adjacent elements requires bringing the minimum-value elements next to each other, which sorting does. Let's walk through this sorting approach.
Given input array = [9, 4, 0, 9, 14, 7, 1] and k = 3
Create another array that contains the elements of the input array paired with their indexes, as shown below,
[9, 0], [4, 1], [0, 2], [9, 3], [14, 4], [7, 5], [1, 6]
then sort this array.
The motive behind this element-and-index array is that, after sorting, the index of each element is not lost.
One more array is required to keep a record of the used indexes, so the initial state after sorting is as shown below,
Element and Index array
..............................
| 0 | 1 | 4 | 7 | 9 | 9 | 14 |
..............................
2 6 1 5 3 0 4 <-- Index
Used index record array
..............................
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6 <-- Index
In the used-index record array, 0 (false) means the element at this index is not yet included in the minimum sum.
The front element of the sorted array is the minimum-value element; we include it in the minimum sum and update the used-index record array to indicate that this element is used, as shown below.
The front element is 0, at index 2, so we set 1 (true) at index 2 of the used-index record array:
min sum = 0
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 0 |
..............................
0 1 2 3 4 5 6
Iterate to the next element in the sorted array; as you can see above, it is 1, with index 6. To include 1 in the minimum sum we have to check whether the left or right adjacent element of 1 is already used. Since 1 has index 6 and is the last element of the input array, we only have to check whether the element at index 5 is already used, which we can do by looking at the used-index record array. As shown above, usedIndexRecord[5] = 0, so 1 can be included in the minimum sum. After using 1, the state is updated to the following:
min sum = 0 + 1
Used index record array
..............................
| 0 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Then iterate to the next element, which is 4 at index 1, but this cannot be considered because the element at index 2 is already used. The same happens with elements 7 and 9, which are at indexes 5 and 3 respectively, adjacent to used elements.
Finally we iterate to 9 at index 0; looking at the used-index record array, usedIndexRecord[1] = 0, so 9 can be included in the minimum sum, and we reach the final state:
min sum = 0 + 1 + 9
Used index record array
..............................
| 1 | 0 | 1 | 0 | 0 | 0 | 1 |
..............................
0 1 2 3 4 5 6
Finally, minimum sum = 10.
One worst-case scenario is when the input array is already sorted: then at least 2*k - 1 elements have to be iterated over to find the minimum sum of k non-adjacent elements, as shown below.
input array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and k = 4; then the following elements (1, 3, 5, and 7, every other element) are taken for the minimum sum,
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Note: You should include all input validation; for example, to find the minimum sum of k non-adjacent elements, the input should have at least 2*k - 1 elements. I am not including these validations here because I am aware of the input constraints of the problem.
#include <iostream>
#include <vector>
#include <algorithm>
using std::cout;
long minSumOfNonAdjacentKEntries(std::size_t k, const std::vector<int>& arr){
    if(arr.size() < 2){
        return 0;
    }

    std::vector<std::pair<int, std::size_t>> numIndexArr;
    numIndexArr.reserve(arr.size());
    for(std::size_t i = 0, arrSize = arr.size(); i < arrSize; ++i){
        numIndexArr.emplace_back(arr[i], i);
    }

    std::sort(numIndexArr.begin(), numIndexArr.end(), [](const std::pair<int, std::size_t>& a,
        const std::pair<int, std::size_t>& b){return a.first < b.first;});

    long minSum = numIndexArr.front().first;
    std::size_t elementCount = 1;
    std::size_t lastIndex = arr.size() - 1;
    std::vector<bool> usedIndexRecord(arr.size(), false);
    usedIndexRecord[numIndexArr.front().second] = true;

    for(std::vector<std::pair<int, std::size_t>>::const_iterator it = numIndexArr.cbegin() + 1,
        endIt = numIndexArr.cend(); elementCount < k && endIt != it; ++it){

        bool leftAdjacentElementUsed = (0 == it->second) ? false : usedIndexRecord[it->second - 1];
        bool rightAdjacentElementUsed = (lastIndex == it->second) ? false : usedIndexRecord[it->second + 1];

        if(!leftAdjacentElementUsed && !rightAdjacentElementUsed){
            minSum += it->first;
            ++elementCount;
            usedIndexRecord[it->second] = true;
        }
    }
    return minSum;
}

int main(){
    cout<< "k = 2, [355, 46, 203, 140, 28], min sum = "<< minSumOfNonAdjacentKEntries(2, {355, 46, 203, 140, 28})
        << '\n';
    cout<< "k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = "<< minSumOfNonAdjacentKEntries(3, {9, 4, 0, 9, 14, 7, 1})
        << '\n';
}
Output:
k = 2, [355, 46, 203, 140, 28], min sum = 74
k = 3, [9, 4, 0, 9, 14, 7, 1], min sum = 10
I'm trying to solve the MaxDoubleSliceSum problem (from Codility) without Kadane's bidirectional algorithm.
Problem Definition:
A non-empty array A consisting of N integers is given.
A triplet (X, Y, Z), such that 0 ≤ X < Y < Z < N, is called a double
slice.
The sum of double slice (X, Y, Z) is the total of A[X + 1] + A[X + 2]
+ ... + A[Y − 1] + A[Y + 1] + A[Y + 2] + ... + A[Z − 1].
For example, array A such that:
A[0] = 3
A[1] = 2
A[2] = 6
A[3] = -1
A[4] = 4
A[5] = 5
A[6] = -1
A[7] = 2
The goal is to find the maximal sum of any double slice. Write a function
that, given a non-empty array A consisting of N integers, returns the
maximal sum of any double slice.
For example, given the array above,
the function should return 17, because no double slice of array A has a sum greater than 17.
I have come up with the following idea:
I take a slice and put the lever (the value in the middle that gets dropped) at the lowest value included in the slice. If I notice that the next value lowers the total sum, I change the lever to it and reduce the sum by the values before the last lever (including the old lever).
int solution(vector<int> &A) {
    if (A.size() < 4)
        return 0;

    int lever = A[1];
    int sum = -lever;
    int presliceValue = 0;
    int maxVal = A[1];

    for (int i = 1; i < A.size() - 1; i++) {
        if (sum + A[i] < sum || A[i] < lever) {
            sum += lever;
            if (presliceValue < 0)
                sum = sum - presliceValue;
            lever = A[i];
            presliceValue = sum + lever;
        }
        else
            sum = sum + A[i];

        if (sum > maxVal)
            maxVal = sum;
    }
    return maxVal;
}
This solution returns a wrong value on a few test cases; unfortunately I cannot reproduce the errors, and Codility does not share the test values.
Failed test cases:
many of the same small sequences, length = ~100,000
large random: random, length = ~100,000
random, numbers from -30 to 30, length = 300
random, numbers from -10^4 to 10^4, length = 70
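Absent Codility's data, one way to recover a failing input is to cross-check solution against an exhaustive reference on many small random arrays. Below is a minimal sketch of such a harness (my own illustration; the array lengths and value ranges are arbitrary choices, not Codility's). Paste it after the solution function above; any mismatch it prints is a reproducible counterexample.

#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

// Exhaustive O(N^4) reference: try every triplet (X, Y, Z) directly.
// Far too slow for real inputs, but fine for arrays of length <= 10,
// which is usually enough to surface a counterexample.
int bruteForce(const std::vector<int>& A) {
    int n = A.size(), best = 0;
    for (int x = 0; x < n; ++x)
        for (int y = x + 1; y < n; ++y)
            for (int z = y + 1; z < n; ++z) {
                int sum = 0;
                for (int i = x + 1; i < z; ++i)
                    if (i != y) sum += A[i];
                best = std::max(best, sum);
            }
    return best;
}

int main() {
    std::srand(std::time(nullptr));
    for (int t = 0; t < 100000; ++t) {
        std::vector<int> A(4 + std::rand() % 6);    // lengths 4..9
        for (int& v : A) v = std::rand() % 21 - 10; // values -10..10
        int expected = bruteForce(A);
        int actual = solution(A); // the function above
        if (expected != actual) {
            std::cout << "expected " << expected << ", got " << actual << " for:";
            for (int v : A) std::cout << ' ' << v;
            std::cout << '\n';
            return 1;
        }
    }
    std::cout << "no mismatch found\n";
}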
I have two lists:
num1 = [1, 3, 5]
num2 = [2, 4, 6]
I need to print all the integers in both lists, from 1 to 6, in order, without using sort or append. I must use a for loop and a counter; the counter should be used as a pointer.
I have tried different ways of using x in the for loop, but I can't seem to get a value of x that will point to each specific index.
x = 0
for num in num1:
    if (num1 + num2)[x] > num:
        print num
    elif (num1 + num2)[x] < num:
        print num[x]
    else:
        x = x + 1
I expect my output to be
1
2
3
4
5
6
but I continue to get error messages.
Though it's not explicitly mentioned, I believe the assumption here is that both lists num1 and num2 are already sorted. With that assumption in place, one way to achieve this is to walk both lists simultaneously and keep printing the smaller value. See the sample below.
num1 = [1, 3, 5, 7, 13, 17, 19]
num2 = [2, 4, 6, 8, 10]

l = num1
oth = num2
if len(num2) > len(num1):
    l = num2
    oth = num1

m = len(l)
i = 0
j = 0
while i < m:
    if j >= len(oth) or l[i] < oth[j]:
        print(l[i])
        i += 1
    else:
        print(oth[j])
        j += 1

while j < len(oth):
    print(oth[j])
    j += 1
I am trying to find the time complexity of this algorithm.
The iterative algorithm produces all the bit-strings within a given Hamming distance from the input bit-string. It generates all increasing sequences 0 <= a[0] < ... < a[dist-1] < strlen(num) and inverts the bits at the corresponding indices.
The vector a keeps the indices of the bits that have to be inverted. So if a contains the current index i, we print 1 instead of 0 and vice versa; otherwise we print the bit as is (see the else part), as shown below:
// e.g. hamming("0000", 2);
void hamming(const char* num, size_t dist) {
    assert(dist > 0);
    vector<int> a(dist);
    size_t k = 0, n = strlen(num);
    a[k] = -1;
    while (true)
        if (++a[k] >= n)
            if (k == 0)
                return;
            else {
                --k;
                continue;
            }
        else
            if (k == dist - 1) {
                // this is an O(n) operation and will be called
                // (n choose dist) times, in total.
                print(num, a);
            }
            else {
                a[k + 1] = a[k];
                ++k;
            }
}
What is the Time Complexity of this algorithm?
My attempt says:
dist * n + (n choose dist) * n + 2
but this seems not to be true; consider the following examples, all with dist = 2:
len = 3, (3 choose 2) = 3 * O(n), 10 while iterations
len = 4, (4 choose 2) = 6 * O(n), 15 while iterations
len = 5, (5 choose 2) = 10 * O(n), 21 while iterations
len = 6, (6 choose 2) = 15 * O(n), 28 while iterations
Here are two representative runs (with the print happening at the start of the loop):
000, len = 3
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
k = 0, total_iter = 5
vector a = 0 3
k = 1, total_iter = 6
vector a = 1 1
Paid O(n)
k = 1, total_iter = 7
vector a = 1 2
k = 0, total_iter = 8
vector a = 1 3
k = 1, total_iter = 9
vector a = 2 2
k = 0, total_iter = 10
vector a = 2 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gsamaras#pythagoras:~/Desktop/generate_bitStrings_HammDistanceT$ ./iter
0000, len = 4
k = 0, total_iter = 1
vector a = -1 0
k = 1, total_iter = 2
vector a = 0 0
Paid O(n)
k = 1, total_iter = 3
vector a = 0 1
Paid O(n)
k = 1, total_iter = 4
vector a = 0 2
Paid O(n)
k = 1, total_iter = 5
vector a = 0 3
k = 0, total_iter = 6
vector a = 0 4
k = 1, total_iter = 7
vector a = 1 1
Paid O(n)
k = 1, total_iter = 8
vector a = 1 2
Paid O(n)
k = 1, total_iter = 9
vector a = 1 3
k = 0, total_iter = 10
vector a = 1 4
k = 1, total_iter = 11
vector a = 2 2
Paid O(n)
k = 1, total_iter = 12
vector a = 2 3
k = 0, total_iter = 13
vector a = 2 4
k = 1, total_iter = 14
vector a = 3 3
k = 0, total_iter = 15
vector a = 3 4
The while loop is somewhat clever and subtle, and it's arguable that it's doing two different things (or even three if you count the initialisation of a). That's what is making your complexity calculations challenging, and it's also less efficient than it could be.
In the abstract, to incrementally compute the next set of indices from the current one, the idea is to find the last position i whose value a[i] is less than n - dist + i, increment a[i], and set the following entries to a[i] + 1, a[i] + 2, and so on.
For example, if dist=5, n=11 and your indexes are:
0, 3, 5, 9, 10
Then 5 is the last value less than n-dist+i (because n-dist is 6, and 10=6+4, 9=6+3, but 5<6+2).
So we increment 5, and set the subsequent integers to get the set of indexes:
0, 3, 6, 7, 8
Now consider how your code runs, assuming k=4
0, 3, 5, 9, 10
a[k] + 1 is 11, so k becomes 3.
++a[k] is 10, so a[k+1] becomes 10, and k becomes 4.
++a[k] is 11, so k becomes 3.
++a[k] is 11, so k becomes 2.
++a[k] is 6, so a[k+1] becomes 6, and k becomes 3.
++a[k] is 7, so a[k+1] becomes 7, and k becomes 4.
++a[k] is 8, and we continue to call the print function.
This code is correct, but it's not efficient, because k scuttles backwards and forwards as it searches for the highest index that can be incremented without causing an overflow in the higher indices. In fact, if the highest incrementable index is j positions from the end, the code uses a non-linear number of iterations of the while loop. You can easily demonstrate this yourself by tracing how many iterations of the while loop occur when n == dist for different values of n: there is exactly one line of output, but you'll see O(2^n) growth in the number of iterations (in fact, 2^(n+1) - 2 iterations).
This scuttling makes your code needlessly inefficient, and also hard to analyse.
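To see that growth concretely, here is a small instrumented variant (my own sketch, not from the original post): it runs the question's loop verbatim, but replaces print with an iteration counter.

#include <cassert>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

// The question's loop, except print(num, a) is replaced by a counter,
// so we can measure how many while iterations actually occur.
size_t hammingIterations(const char* num, size_t dist) {
    assert(dist > 0);
    std::vector<int> a(dist);
    size_t k = 0, n = std::strlen(num), iters = 0;
    a[k] = -1;
    while (true) {
        ++iters;
        if (++a[k] >= (int)n) {
            if (k == 0) return iters;
            --k; // step back and try to increment the previous index
        } else if (k == dist - 1) {
            // print(num, a) would happen here
        } else {
            a[k + 1] = a[k];
            ++k;
        }
    }
}

int main() {
    // With n == dist there is exactly one line of output,
    // yet the iteration count grows as 2^(n+1) - 2.
    for (size_t n = 1; n <= 10; ++n) {
        std::string s(n, '0');
        std::cout << "n = " << n << ": " << hammingIterations(s.c_str(), n) << " iterations\n";
    }
}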
Instead, you can write the code in a more direct way:
void hamming2(const char* num, size_t dist) {
    int a[dist];
    for (int i = 0; i < dist; i++) {
        a[i] = i;
    }
    size_t n = strlen(num);
    while (true) {
        print(num, a);
        int i;
        for (i = dist - 1; i >= 0; i--) {
            if (a[i] < n - dist + i) break;
        }
        if (i < 0) return;
        a[i]++;
        for (int j = i + 1; j < dist; j++) a[j] = a[i] + j - i;
    }
}
Now, each time through the while loop produces a new set of indexes. The exact cost per iteration is not straightforward, but since print is O(n) and the remaining code in the while loop is at worst O(dist), the overall cost is O(N_INCR_SEQ(n, dist) * n), where N_INCR_SEQ(n, dist) is the number of increasing sequences of natural numbers less than n of length dist, which is (n choose dist). Someone in the comments provides a link that gives a formula for this.
Notice that, given n, which represents the length, and t, which represents the required distance, the number of increasing, non-negative series of t integers between 1 and n (or, in index form, between 0 and n-1) is indeed n choose t, since we pick t distinct indices.
The problem occurs with your generation of those series:
-First, notice that, for example, in the case of length 4, you actually go over 5 different indices, 0 to 4.
-Secondly, notice that you are taking into account series with identical indices (in the case of t=2: 0 0, 1 1, 2 2, and so on); in general, you go through every non-decreasing series, instead of only every increasing series.
So when calculating the time complexity of your program, make sure you take that into account.
Hint: try to make a one-to-one correspondence from the universe of those series to the universe of integer solutions of some equation.
If you need the direct solution, take a look here:
https://math.stackexchange.com/questions/432496/number-of-non-decreasing-sequences-of-length-m
The final count is ((n+t-1) choose t); but, noting the first bullet, in your program it is actually (((n+1)+t-1) choose t), since you loop over one extra index.
Denote
A := ((n+1)+t-1) choose t, B := n choose t
Overall we get O(1) + B*O(n) + (A-B)*O(1): each of the B strictly increasing sequences pays O(n) for the print, and each of the remaining A - B iterations pays O(1).
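As a quick sanity check against the traces in the question (all with t = 2), A = ((n+1)+t-1) choose t = (n+2) choose 2:
n = 3: (5 choose 2) = 10
n = 4: (6 choose 2) = 15
n = 5: (7 choose 2) = 21
n = 6: (8 choose 2) = 28
which matches the 10, 15, 21, and 28 while-loop iterations observed above exactly.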
I'm a second-year CS student. In my algorithms and data structures course I've been given the following problem:
Input:
2<=r<=20
2<=o<=10
0<=di<=100
Output:
number of combinations
or "NO" if there are none
r is number of integers
di are said integers
o is number of groups
I have to find the number of correct combinations. A correct combination is one where every integer is assigned to some group, none of the groups is empty, and the sum of the integers in every group is the same.
For instance:
r = 4;
di = {5, 4, 5, 6}
o = 2;
So the sum of integers in every group should add up to 10:
5 + 4 + 5 + 6 = 20
20 / o = 20 / 2 = 10
So we can make the following groups:
{5, 5}, {4, 6}
{5, 5}, {6, 4}
{5, 5}, {4, 6}
{5, 5}, {6, 4}
As we can see, every combination is essentially the same as the first one (the order of elements within a group doesn't matter).
So we actually have just one correct combination: {5, 5}, {4, 6}, which means the output is equal to one.
Other examples:
r = 4;
di = {10, 2, 8, 6}
o = 2;
10 + 2 + 8 + 6 = 26;
26 / o = 26 / 2 = 13
There is no way to make such sums from these integers, so the output is "NO".
I had the following idea for getting this done:
struct Input {            // holds the data
    int num;              // number of integers
    int groups;           // number of groups
    int sumPerGroup;      // sum of integers per group
    int *integers;        // said integers
};

bool f(bool *t, int s) {  // generates binary numbers (right to left)
    int i = 0;
    while (t[i]) i++;
    t[i] = 1;
    if (i >= s) return true;
    if (!t[i + 1])
        for (int j = i - 1; j >= 0; j--)
            t[j] = 0;
    return false;
}

void solve(Input *input, int &result) {
    bool bin[input->num];  // holds the generated binary number
    bool used[input->num]; // integers already used
    for (int i = 0; i < input->num; i++) {
        bin[i] = 0;
        used[i] = 0;
    }
    int solved = 0;
    do {
        int sum = 0;
        for (int i = 0; i < input->num; i++) { // check whether the generated combination gives the right sum
            if (sum > input->sumPerGroup) break;
            if (bin[i] && !used[i]) sum += input->integers[i]; // if this integer wasn't used before, add it up
            if (sum == input->sumPerGroup) { // if it adds up as it should
                for (int j = 0; j < input->num; j++) used[j] = bin[j]; // mark the integers as used
                solved++;                    // and mark the group as solved
                sum = 0;
            }
            if (solved == input->groups) { // if the number of solved groups equals the number of groups
                result++;                  // we found another correct combination
                solved = 0;
            }
        }
    } while (!f(bin, input->num)); // as long as we can get more combinations
}
So, the main idea is:
1. Generate a combination of the numbers as a binary number.
2. Check whether that combination gives the required sum.
3. If it does, mark it.
4. Rinse and repeat.
So for the input from the first example, {5, 4, 5, 6} in 2 groups:
5 4 5 6
-------
0 0 0 0
1 0 0 0
...
1 0 1 0 -> this one is fine, because 5 + 5 = 10; I mark it as used
1 1 1 0
...
0 1 0 1 -> another one works (4 + 6 = 10); marked as used
So far I have 2 working groups, which equals the required 2 groups: job done, it's a correct combination.
The real problem behind my idea is that I have no way of using an integer again once I mark it as "used". This way, in more complicated examples, I would miss quite a lot of correct groupings. My question is: what is the correct approach to this kind of problem? I've tried a recursive approach, and it didn't work any better (for the same reason).
Another idea I had was to permute the integers from the input (with std::next_permutation(...), for instance) each time I mark some group as used, but even on paper that looks silly.
I don't ask you to solve the problem for me, but if you could point out any flaws in my reasoning, that would be terrific.
Also, I'm not a native speaker, so I'd like to apologise in advance if I've butchered any sentences (I know I have).
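For illustration, one common way to avoid the "used"-flag problem is to recurse over the integers themselves and try placing each one into every group, undoing the placement when backtracking. To count each unordered combination exactly once, an integer may only open the first still-empty group. Below is a minimal sketch of that idea (my own illustration, not a full solution; it counts partitions of the index set, so inputs where swapping equal values yields the same value-level grouping may need extra deduplication).

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Place integer i into one of the groups. A group may only be "opened"
// (receive its first element) in order, which makes group labels canonical
// and counts each unordered combination once. Since the total equals
// o * target and every group sum is capped at target, reaching the end
// with all groups opened means every group sums to exactly target.
int countCombinations(const std::vector<int>& di, std::vector<int>& groupSum,
                      int target, std::size_t i, std::size_t opened) {
    if (i == di.size())
        return opened == groupSum.size() ? 1 : 0; // all groups non-empty?
    int total = 0;
    std::size_t limit = std::min(opened + 1, groupSum.size());
    for (std::size_t g = 0; g < limit; ++g) {
        if (groupSum[g] + di[i] > target) continue; // prune overfull groups
        groupSum[g] += di[i];
        total += countCombinations(di, groupSum, target, i + 1,
                                   std::max(opened, g + 1));
        groupSum[g] -= di[i]; // undo: the integer is free to be reused
    }
    return total;
}

int main() {
    std::vector<int> di{5, 4, 5, 6};
    std::size_t o = 2;
    int sum = 0;
    for (int v : di) sum += v;
    if (sum % (int)o != 0) { std::cout << "NO\n"; return 0; }
    std::vector<int> groupSum(o, 0);
    int result = countCombinations(di, groupSum, sum / (int)o, 0, 0);
    std::cout << (result ? std::to_string(result) : std::string("NO")) << '\n'; // prints 1
}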