Given the heights of n towers and a value k, we must either increase or decrease the height of every tower by k (exactly once), where k > 0. The task is to minimize the difference between the heights of the tallest and the shortest tower after the modifications, and output this difference.
I get the intuition behind the solution below, but I cannot convince myself of its correctness.
// C++ program to find the minimum possible
// difference between maximum and minimum
// elements when we have to add/subtract
// every number by k
#include <bits/stdc++.h>
using namespace std;
// Modifies the array by subtracting/adding
// k to every element such that the difference
// between maximum and minimum is minimized
int getMinDiff(int arr[], int n, int k)
{
if (n == 1)
return 0;
// Sort all elements
sort(arr, arr+n);
// Initialize result
int ans = arr[n-1] - arr[0];
// Handle corner elements
int small = arr[0] + k;
int big = arr[n-1] - k;
if (small > big)
swap(small, big);
// Traverse middle elements
for (int i = 1; i < n-1; i ++)
{
int subtract = arr[i] - k;
int add = arr[i] + k;
// If both subtraction and addition
// do not change diff
if (subtract >= small || add <= big)
continue;
// Either subtraction causes a smaller
// number or addition causes a greater
// number. Update small or big using
// greedy approach (If big - subtract
// causes smaller diff, update small
// Else update big)
if (big - subtract <= add - small)
small = subtract;
else
big = add;
}
return min(ans, big - small);
}
// Driver function to test the above function
int main()
{
int arr[] = {4, 6};
int n = sizeof(arr)/sizeof(arr[0]);
int k = 10;
cout << "\nMaximum difference is "
<< getMinDiff(arr, n, k);
return 0;
}
Can anyone help me provide the correct solution to this problem?
The code above works; however, I don't find much explanation for it, so I'll try to add some in order to help develop intuition.
For any given tower, you have two choices, you can either increase its height or decrease it.
Now, if you decide to increase a tower's height from, say, H[i] to H[i] + K, then you can also increase the height of all shorter towers, as that won't affect the maximum. Similarly, if you decide to decrease the height of a tower from H[i] to H[i] − K, then you can also decrease the heights of all taller towers.
We will make use of this. We have n buildings, and we'll try to make each building the highest in turn, and see which choice gives us the least range of heights (which is our answer). Let me explain:
So what we want to do is: 1) We first sort the array (you will soon see why).
2) Then, for every building from i = 0 to n-2[1], we try to make it the highest (by adding K to it and to the buildings on its left, and subtracting K from the buildings on its right).
So say we're at building H[i]; we've added K to it and to the buildings before it, and subtracted K from the buildings after it. The minimum height of the buildings will now be min(H[0] + K, H[i+1] - K), i.e. min(1st building + K, next building on the right - K).
(Note: This is because we sorted the array. Convince yourself by taking a few examples.)
Likewise, the maximum height of the buildings will be max(H[i] + K, H[n-1] - K), i.e. max(current building + K, last building on the right - K).
3) max - min gives you the range.
[1] Note the case i = n-1: there is no building after the current one, so we add K to every building, and the range is simply height[n-1] - height[0], since K is added to everything and cancels out.
Here's a Java implementation based on the idea above:
import java.util.Arrays;

class Solution {
    int getMinDiff(int[] arr, int n, int k) {
        Arrays.sort(arr);
        int ans = arr[n - 1] - arr[0];
        int smallest = arr[0] + k, largest = arr[n - 1] - k;
        for (int i = 0; i < n - 1; i++) {
            int min = Math.min(smallest, arr[i + 1] - k);
            int max = Math.max(largest, arr[i] + k);
            if (min < 0) continue;   // skip choices that would make a tower's height negative
            ans = Math.min(ans, max - min);
        }
        return ans;
    }
}
int getMinDiff(int a[], int n, int k) {
    sort(a, a + n);
    int i, mx, mn, ans;
    ans = a[n - 1] - a[0]; // this can be one possible solution
    for (i = 1; i < n; i++) // start at 1 so that a[i-1] below is always valid
    {
        if (a[i] >= k) // since the height of a tower can't be -ve, only consider towers that can be lowered
        {
            mn = min(a[0] + k, a[i] - k);
            mx = max(a[n - 1] - k, a[i - 1] + k);
            ans = min(ans, mx - mn);
        }
    }
    return ans;
}
This is C++ code, it passed all the test cases.
This Python code might be of some help to you. The code is self-explanatory.
def getMinDiff(arr, n, k):
    arr = sorted(arr)
    ans = arr[-1] - arr[0]  # this case occurs when we add k to (or subtract k from) every element
    for i in range(n):
        # after sorting, arr[0] is the minimum; adding k pushes it towards the maximum.
        # subtracting k from arr[i] may give an even smaller minimum (widening the range).
        mn = min(arr[0] + k, arr[i] - k)
        # after sorting, arr[n-1] is the maximum; subtracting k pushes it towards the minimum.
        # adding k to arr[i] may give an even bigger maximum (widening the range).
        mx = max(arr[n - 1] - k, arr[i] + k)
        ans = min(ans, mx - mn)
    return ans
Here's a solution.
But before jumping to the solution, here's some info that is required to understand it. In the best-case scenario the minimum difference would be zero. This can happen only in two cases: (1) the array contains duplicates, or (2) for some element, let's say 'x', there exists another element in the array with the value 'x + 2*k'.
The idea is pretty simple.
First we would sort the array.
Next, we will try to find either the optimum value (for which the answer comes out to be zero) or at least the closest number to the optimum value, using binary search.
Here's a JavaScript implementation of the algorithm:
function minDiffTower(arr, k) {
arr = arr.sort((a,b) => a-b);
let minDiff = Infinity;
let prev = null;
for (let i=0; i<arr.length; i++) {
let el = arr[i];
// Handling case when the array have duplicates
if (el == prev) {
minDiff = 0;
break;
}
prev = el;
let targetNum = el + 2*k; // Let's say we have an element 10: the difference would be zero if there exists an element with value 10+2*k (this is the 'optimum value' discussed in the explanation)
let closestMatchDiff = Infinity; // It's not necessary that there would exist 'targetNum' in the array, so we try to find the closest to this number using Binary Search
let lb = i+1;
let ub = arr.length-1;
while (lb<=ub) {
let mid = lb + ((ub-lb)>>1);
let currMidDiff = arr[mid] > targetNum ? arr[mid] - targetNum : targetNum - arr[mid];
closestMatchDiff = Math.min(closestMatchDiff, currMidDiff);
if (arr[mid] == targetNum) break; // in this case the answer would be simply zero, no need to proceed further
else if (arr[mid] < targetNum) lb = mid+1;
else ub = mid-1;
}
minDiff = Math.min(minDiff, closestMatchDiff);
}
return minDiff;
}
Here is the C++ code; I have continued from where you left off. The code is self-explanatory.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;
int minDiff(int arr[], int n, int k)
{
// If the array has only one element.
if (n == 1)
{
return 0;
}
//sort all elements
sort(arr, arr + n);
//initialise result
int ans = arr[n - 1] - arr[0];
//Handle corner elements
int small = arr[0] + k;
int big = arr[n - 1] - k;
if (small > big)
{
// Swap so that small stays <= big.
int temp = small;
small = big;
big = temp;
}
//traverse middle elements
for (int i = 0; i < n - 1; i++)
{
int subtract = arr[i] - k;
int add = arr[i] + k;
// If both subtraction and addition do not change the diff.
// Subtraction does not give new minimum.
// Addition does not give new maximum.
if (subtract >= small or add <= big)
{
continue;
}
// Either subtraction causes a smaller number or addition causes a greater number.
//Update small or big using greedy approach.
// if big-subtract causes smaller diff, update small Else update big
if (big - subtract <= add - small)
{
small = subtract;
}
else
{
big = add;
}
}
return min(ans, big - small);
}
int main(void)
{
int arr[] = {1, 5, 15, 10};
int n = sizeof(arr) / sizeof(arr[0]);
int k = 3;
cout << "\nMaximum difference is: " << minDiff(arr, n, k) << endl;
return 0;
}
class Solution {
public:
    int getMinDiff(int arr[], int n, int k) {
        sort(arr, arr + n);
        int diff = arr[n - 1] - arr[0];
        int mine, maxe;
        // Shift everything up by k first; "lowering" a tower then means subtracting 2*k.
        for (int i = 0; i < n; i++)
            arr[i] += k;
        mine = arr[0];
        maxe = arr[n - 1] - 2 * k;
        // Lower the suffix arr[i..n-1] by 2*k and track the resulting min and max.
        for (int i = n - 1; i > 0; i--) {
            if (arr[i] - 2 * k < 0)   // a tower would become negative, stop
                break;
            mine = min(mine, arr[i] - 2 * k);
            maxe = max(arr[i - 1], arr[n - 1] - 2 * k);
            diff = min(diff, maxe - mine);
        }
        return diff;
    }
};
class Solution:
    def getMinDiff(self, arr, n, k):
        # code here
        arr.sort()
        res = arr[-1] - arr[0]
        for i in range(1, n):
            if arr[i] >= k:
                # at a time we can increase or decrease one number only.
                # Hence assuming we decrease the ith elem, we will increase the (i-1)th elem.
                # using this we basically find the new_min and new_max possible,
                # and if the difference is smaller than res, we keep it.
                new_min = min(arr[0] + k, arr[i] - k)
                new_max = max(arr[-1] - k, arr[i - 1] + k)
                res = min(res, new_max - new_min)
        return res
Related
Given an array of N numbers (not necessarily sorted), we can merge any two numbers into one; the cost of merging the two numbers is equal to the sum of the two values. The task is to find the total minimum cost of merging all the numbers.
Example:
Let the array A = [1,2,3,4]
Then, we can remove 1 and 2, add them, and put the sum back into the array. The cost of this step would be (1+2) = 3.
Now, A = [3,3,4], Cost = 3
In the second step, we can remove 3 and 3, add them, and put the sum back into the array. The cost of this step would be (3+3) = 6.
Now, A = [4,6], Cost = 6
In the third step, we can remove both elements from the array, add them, and put the sum back into the array again. The cost of this step would be (4+6) = 10.
Now, A = [10], Cost = 10
So, the total cost turns out to be 19 (10+6+3).
We will have to pick the 2 smallest elements to minimize our total cost. A simple way to do this is using a min heap structure. We will be able to get the minimum element in O(1) and insertion will be O(log n).
The time complexity of this approach is O(n log n).
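For reference, that heap-based baseline could be sketched like this (minCostHeap is just an illustrative name, not the code I'm asking about):
#include <queue>
#include <vector>
#include <functional>

// Min-heap baseline: repeatedly merge the two smallest values
// and accumulate the cost of every merge.
long long minCostHeap(const std::vector<int>& arr) {
    std::priority_queue<long long, std::vector<long long>,
                        std::greater<long long>> pq(arr.begin(), arr.end());
    long long cost = 0;
    while (pq.size() > 1) {
        long long a = pq.top(); pq.pop();
        long long b = pq.top(); pq.pop();
        cost += a + b;       // cost of merging the two smallest values
        pq.push(a + b);      // put the merged value back
    }
    return cost;
}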
But I tried another approach, and wasn't able to find the cases where it fails. The basic idea is that the sum of the two smallest elements chosen at any step will always be at least the sum of the pair chosen before it. So the "temp" array will always be sorted, and we will be able to access its minimum element in O(1).
As I am sorting the input array and then simply traversing the array, the complexity of my approach is O(n log n).
int minCost(vector<int>& arr) {
sort(arr.begin(), arr.end());
// temp array will contain the sum of all the pairs of minimum elements
vector<int> temp;
// index for arr
int i = 0;
// index for temp
int j = 0;
int cost = 0;
// while we have more than 1 element combined in both the input and temp array
while(arr.size() - i + temp.size() - j > 1) {
int num1, num2;
// selecting num1 (minimum element)
if(i < arr.size() && j < temp.size()) {
if(arr[i] <= temp[j])
num1 = arr[i++];
else
num1 = temp[j++];
}
else if(i < arr.size())
num1 = arr[i++];
else if(j < temp.size())
num1 = temp[j++];
// selecting num2 (second minimum element)
if(i < arr.size() && j < temp.size()) {
if(arr[i] <= temp[j])
num2 = arr[i++];
else
num2 = temp[j++];
}
else if(i < arr.size())
num2 = arr[i++];
else if(j < temp.size())
num2 = temp[j++];
// appending the sum of the minimum elements in the temp array
int sum = num1 + num2;
temp.push_back(sum);
cost += sum;
}
return cost;
}
Is this approach correct? If not, please let me know what I am missing, and the test cases in which this algorithm fails.
SPOJ Link for the same problem
The logic seems very solid to me... the computed sums will never be decreasing, so at each step you only need to consider the oldest two computed sums, the next two input elements, or the oldest sum and the next element.
I would just simplify the code:
#include <vector>
#include <algorithm>
#include <stdio.h>
int hsum(std::vector<int> arr) {
int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
std::sort(arr.begin(), arr.end());
std::vector<int> temp;
auto get = [&]()->int {
if (j == nj || (i < ni && arr[i] < temp[j])) return arr[i++];
return temp[j++];
};
while ((ni-i)+(nj-j)>1) {
int a = get(), b = get();
res += a+b;
temp.push_back(a + b); nj++;
}
return res;
}
int main() {
fprintf(stderr, "%i\n", hsum(std::vector<int>{1,4,2,3}));
return 0;
}
Very nice idea!
Another improvement comes from noting that the combined number of unprocessed elements in the two arrays (the original one and the temporary one holding the sums) decreases at every step.
Since each step consumes two elements and produces only one (and the very first step consumes two input elements), a "walking queue" of sums allocated inside the array itself can never catch up with the reading pointer.
This means that there is no need of a temporary array and the space for the sums can be found in the array itself...
int hsum(std::vector<int> arr) {
int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
std::sort(arr.begin(), arr.end());
auto get = [&]()->int {
if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
return arr[j++];
};
while ((ni-i)+(nj-j)>1) {
int a = get(), b = get();
res += a+b;
arr[nj++] = a + b;
}
return res;
}
About the error on SPOJ... I tried briefly to search for the problem but I didn't succeed. However, I tried generating random arrays of random lengths and checking this solution against a "brute-force" one implemented directly from the specs, and I'm reasonably confident that the algorithm is correct.
I know at least one programming arena (Topcoder) where sometimes the problems are carefully crafted so that the computation gives correct results if using unsigned but not if using int (or if using unsigned long long but not if using long long) because of integer overflow.
I don't know if SPOJ also does this kind of nonsense(1)... maybe that is the reason some hidden test case fails...
EDIT
Checking with SPOJ the algorithm passes if using long long values... this is the entry I used:
#include <stdio.h>
#include <algorithm>
#include <vector>
int main(int argc, const char *argv[]) {
int n;
scanf("%i", &n);
for (int testcase=0; testcase<n; testcase++) {
int sz; scanf("%i", &sz);
std::vector<long long> arr(sz);
for (int i=0; i<sz; i++) scanf("%lli", &arr[i]);
int ni = arr.size(), nj = 0, i = 0, j = 0;
long long res = 0;
std::sort(arr.begin(), arr.end());
auto get = [&]() -> long long {
if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
return arr[j++];
};
while ((ni-i)+(nj-j)>1) {
long long a = get(), b = get();
res += a+b;
arr[nj++] = a + b;
}
printf("%lli\n", res);
}
return 0;
}
PS: This very kind of computation is also what is needed to build a Huffman tree for entropy coding, given the symbol frequency table, so it's not a mere random exercise but has practical applications.
(1) I'm saying "nonsense" because in Topcoder they never give problems that require 65 bits; thus it's not a genuine care about overflows, but just setting traps for novices.
Another thing I consider bad practice on TC: some problems are carefully designed so that the correct algorithm in C++ barely fits within the time limit; just use another language (and get, e.g., a 2× slowdown) and you cannot solve the problem.
First of all, think simple!
When using a priority queue, the problem is easy!
In the first test case:
1 6 3 20
// after pushing to the queue (kept sorted):
1 3 6 20
// repeatedly pop the two smallest, push their sum, and add it to the cost:
(1 + 3) 6 20    cost = 4
(4 + 6) 20      cost = 4 + 10 = 14
(10 + 20)       cost = 14 + 30 = 44
30              total cost = 44
#include<iostream>
#include<queue>
using namespace std;
int main()
{
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
priority_queue<long long int, vector<long long int>, greater<long long int>> q;
for (int i = 0; i < n; ++i) {
int k;
cin >> k;
q.push(k);
}
long long int sum = 0;
while (q.size() > 1) {
long long int a = q.top();
q.pop();
long long int b = q.top();
q.pop();
q.push(a + b);
sum += a + b;
}
cout << sum << "\n";
}
}
Basically, we need to sort the list in descending order and then compute the cost like this:
A.sort(reverse=True)
cost = 0
for i in range(len(A)):
    cost += A[i] * (i+1)
return cost
Problem statement: Given a set of n coins of some denominations (possibly repeating, in random order) and a number k. A game is played by a single player in the following manner: the player can pick 0 to k coins contiguously, but must then leave the next coin unpicked. Find the highest sum of coins he/she can collect in this manner.
Input:
First line contains 2 space-separated integers n and x respectively, which denote
n - Size of the array
x - Window size
Output:
A single integer denoting the max sum the player can obtain.
Working Soln Link: Ideone
long long solve(int n, int x) {
if (n == 0) return 0;
long long total = accumulate(arr + 1, arr + n + 1, 0ll);
if (x >= n) return total;
multiset<long long> dp_x;
for (int i = 1; i <= x + 1; i++) {
dp[i] = arr[i];
dp_x.insert(dp[i]);
}
for (int i = x + 2; i <= n; i++) {
dp[i] = arr[i] + *dp_x.begin();
dp_x.erase(dp_x.find(dp[i - x - 1]));
dp_x.insert(dp[i]);
}
long long ans = total;
for (int i = n - x; i <= n; i++) {
ans = min(ans, dp[i]);
}
return total - ans;
}
Can someone kindly explain how this code works, i.e., how lines 12-26 in the Ideone solution produce the correct answer?
I have dry-run the code with pen and paper and found that it gives the correct answer, but I couldn't figure out the algorithm used (if any). Can someone kindly explain how lines 12-26 produce the correct answer? Is there any technique or algorithm at use here?
I am new to DP, so if someone can point me to a tutorial (YouTube video, etc.) related to this kind of problem, that would be great too. Thank you.
It looks like the idea is to convert the problem: you must choose at least one coin out of every x+1 consecutive coins, and the total value of the chosen coins must be minimal. The original problem's answer is then [sum of all values] - [answer of the new problem].
Now we're ready to talk about dynamic programming. Let's define a recurrence for f(i), which means "the partial answer of the new problem considering the 1st to i-th coins, with the i-th coin chosen". (Sorry about the awkward description, edits welcome.)
f(i) = a(i)                                        if i <= x+1
f(i) = a(i) + min(f(i-1), f(i-2), ..., f(i-x-1))   otherwise
where a(i) is the i-th coin value
I added some comments line by line.
// NOTE f() is dp[] and a() is arr[]
long long solve(int n, int x) {
if (n == 0) return 0;
long long total = accumulate(arr + 1, arr + n + 1, 0ll); // get the sum
if (x >= n) return total;
multiset<long long> dp_x; // A min-heap (with fast random access)
for (int i = 1; i <= x + 1; i++) { // For 1 to (x+1)th,
dp[i] = arr[i]; // f(i) = a(i)
dp_x.insert(dp[i]); // Push the value to the heap
}
for (int i = x + 2; i <= n; i++) { // For the rest,
dp[i] = arr[i] + *dp_x.begin(); // f(i) = a(i) + min(...)
dp_x.erase(dp_x.find(dp[i - x - 1])); // Erase the oldest one from the heap
dp_x.insert(dp[i]); // Push the value to the heap, so it keeps the latest x+1 elements
}
long long ans = total;
for (int i = n - x; i <= n; i++) { // Find minimum of dp[] (among candidate answers)
ans = min(ans, dp[i]);
}
return total - ans;
}
Please also note that the multiset is used as a min-heap. However, we also need to erase arbitrary old values quickly, which a multiset can do in logarithmic time. So the overall time complexity is O(n log x).
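One detail worth calling out in the code above: dp_x.erase(dp_x.find(v)) removes a single copy of v, whereas dp_x.erase(v) would remove every copy. A tiny standalone illustration:
#include <set>
#include <cassert>

int main() {
    std::multiset<long long> s = {4, 4, 7};
    s.erase(s.find(4));   // erases exactly one 4 (by iterator)
    assert(s.count(4) == 1);
    s.insert(4);
    s.erase(4);           // erases every remaining 4 (by value)
    assert(s.count(4) == 0);
    return 0;
}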
I have a progression "a", where the first two numbers are given (a1 and a2) and every next number is the smallest sum of a contiguous subarray (of the progression so far) that is bigger than the previous number.
For example if i have a1 = 2 and a2 = 3, so the progression will be
2, 3, 5(=2+3), 8(=3+5), 10(=2+3+5), 13(=5+8), 16(=3+5+8),
18(=2+3+5+8=8+10), 23(=5+8+10=10+13), 26(=3+5+8+10), 28(=2+3+5+8+10), 29(=13+16)...
I need to find the Nth number in this progression. ( Time limit is 0.7 seconds)
(a1 is smaller than a2, a2 is smaller than 1000 and N is smaller than 100000)
I tried priority queue, set, map, https://www.geeksforgeeks.org/find-subarray-with-given-sum/ and some other things.
I thought that the priority queue would work, but it exceeds the memory limit (256 MB), so I am pretty much hopeless.
Here's what is performing the best at the moment.
int main(){
int a1, a2, n;
cin>>a1>>a2>>n;
priority_queue< int,vector<int>,greater<int> > pq;
pq.push(a1+a2);
int a[n+1];//contains sum of the progression
a[0]=0;
a[1]=a1;
a[2]=a1+a2;
for(int i=3;i<=n;i++){
while(pq.top()<=a[i-1]-a[i-2])
pq.pop();
a[i]=pq.top()+a[i-1];
pq.pop();
for(int j=1; j<i && a[i]-a[j-1]>a[i]-a[i-1] ;j++)
pq.push(a[i]-a[j-1]);
}
cout<<a[n]-a[n-1];
}
I've been trying to solve this for the last 4 days without any success.
Sorry for the bad English, I am only 14 and not from an English-speaking country.
SOLUTION (Big thanks to n.m. and גלעד ברקן)
V1 (n.m.'s solution)
#include <iostream>
#include <queue>
#include <vector>
using namespace std;
struct sliding_window{
int start_pos;
int end_pos;
int sum;
sliding_window(int new_start_pos,int new_end_pos,int new_sum){
start_pos=new_start_pos;
end_pos=new_end_pos;
sum=new_sum;
}
};
class Compare{
public:
bool operator() (sliding_window &lhs, sliding_window &rhs){
return (lhs.sum>rhs.sum);
}
};
int main(){
int a1, a2, n;
//input
cin>>a1>>a2>>n;
int a[n+1];
a[0]=a1;
a[1]=a2;
queue<sliding_window> leftOut;
priority_queue< sliding_window, vector<sliding_window>, Compare> pq;
//add the first two sliding window positions that will expand with time
pq.push(sliding_window(0,0,a1));
pq.push(sliding_window(1,1,a2));
for(int i=2;i<n;i++){
int target=a[i-1]+1;
//expand the sliding window with the smalest sum
while(pq.top().sum<target){
sliding_window temp = pq.top();
pq.pop();
//if the window can't be expanded, it is added to leftOut queue
if(temp.end_pos+1<i){
temp.end_pos++;
temp.sum+=a[temp.end_pos];
pq.push(temp);
}else{
leftOut.push(temp);
}
}
a[i]=pq.top().sum;
//add the removed sliding windows and new sliding window in to the queue
pq.push(sliding_window(i,i,a[i]));
while(leftOut.empty()==false){
pq.push(leftOut.front());
leftOut.pop();
}
}
//print out the result
cout<<a[n-1];
}
V2 (גלעד ברקן's solution)
#include <stdio.h>
int find_index(int target, int ps[], int ptrs[], int n){
int cur=ps[ptrs[n]]-ps[0];
while(cur<target){
ptrs[n]++;
cur=ps[ptrs[n]]-ps[0];
}
return ptrs[n];
}
int find_window(int d, int min, int ps[], int ptrs[]){
int cur=ps[ptrs[d]+d-1]-ps[ptrs[d]-1];
while(cur<=min){
ptrs[d]++;
cur=ps[ptrs[d]+d-1]-ps[ptrs[d]-1];
}
return ptrs[d];
}
int main(void){
int a1, a2, n, i;
int args = scanf("%d %d %d",&a1, &a2, &n);
if (args != 3)
printf("Failed to read input.\n");
int a[n];
a[0]=a1;
a[1]=a2;
int ps[n+1];
ps[0]=0;
ps[1]=a[0];
ps[2]=a[0]+a[1];
for (i=3; i<n+1; i++)
ps[i] = 1000000;
int ptrs[n+1];
for(i=0;i<n+1;i++)
ptrs[i]=1;
for(i=2;i<n;i++){
int target=a[i-1]+1;
int max_len=find_index(target,ps, ptrs, n);
int cur=ps[max_len]-ps[0];
int best=cur;
for(int d=max_len-1;d>1;d--){
int l=find_window(d, a[i-1], ps, ptrs);
int cur=ps[l+d-1]-ps[l-1];
if(cur==target){
best=cur;
break;
}
if(cur>a[i-1]&&cur<best)
best=cur;
}
a[i]=best;
ps[i+1]=a[i]+ps[i];
}
printf("%d",a[n-1]);
}
Your priority queue is too big, you can get away with a much smaller one.
Have a priority queue of subarrays, represented e.g. by triples (lowerIndex, upperIndex, sum) and keyed by the sum. Given array A of size N, for each index i from 0 to N-2 there is exactly one subarray in the queue with lowerIndex == i; its sum is the minimal possible sum greater than the last element.
At each step of the algorithm:
Add the sum from the first element of the queue as the new element of A.
Update the first queue element (and all others with the same sum) by extending its upperIndex and updating sum, so it's greater than the new last element.
Add a new subarray of two elements with indices (N-2, N-1) to the queue.
The complexity is a bit hard to analyse because of the duplicate sums in p.2 above, but I guess there shouldn't be too many of those.
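For concreteness, here is one compact way to render those three steps as code (a sketch only; it mirrors the V1 program above, but appends A[i] before fixing up the windows, which removes the need for the leftOut queue):
#include <cstdio>
#include <queue>
#include <vector>
#include <tuple>
#include <functional>

int main() {
    int a1, a2, n;
    if (std::scanf("%d %d %d", &a1, &a2, &n) != 3) return 1;
    std::vector<long long> A = { a1, a2 };
    // one window (sum, lowerIndex, upperIndex) per lower index, ordered by sum
    using Win = std::tuple<long long, int, int>;
    std::priority_queue<Win, std::vector<Win>, std::greater<Win>> pq;
    pq.push(Win{ a1 + a2, 0, 1 });
    for (int i = 2; i < n; i++) {
        long long next = std::get<0>(pq.top());   // smallest window sum > A[i-1]
        A.push_back(next);
        // restore the invariant: every window's sum must exceed the new last element
        while (std::get<0>(pq.top()) <= next) {
            auto [sum, lo, hi] = pq.top();
            pq.pop();
            while (sum <= next) sum += A[++hi];   // extension stops at hi == i at the latest
            pq.push(Win{ sum, lo, hi });
        }
        pq.push(Win{ A[i - 1] + A[i], i - 1, i }); // new two-element window ending at the new element
    }
    std::printf("%lld\n", A[n - 1]);
    return 0;
}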
It might be enough to try each relevant subarray length to find the next element. If we binary search on each length for the optimal window, we can have an O(n * log(n) * sqrt(n)) solution.
But we can do better by observing that each subarray length has a lower-bound index that only increases as n does. If we keep a pointer to the lowest index for each subarray length and simply iterate upwards each time, each pointer is guaranteed to increase at most n times. Since there are O(sqrt n) pointers, we have O(n * sqrt n) total iterations.
A rough draft of the pointer idea follows.
UPDATE
For an actual submission, the find_index function was converted to another increasing pointer for speed. (Submission here, username "turnerware"; C code here.)
let n = 100000
let A = new Array(n)
A[0] = 2
A[1] = 3
let ps = new Array(n + 1)
ps[0] = 0
ps[1] = A[0]
ps[2] = A[0] + A[1]
let ptrs = new Array(n + 1).fill(1)
function find_index(target, ps){
let low = 0
let high = ps.length
while (low != high){
let mid = (high + low) >> 1
let cur = ps[mid] - ps[0]
if (cur <= target)
low = mid + 1
else
high = mid
}
return low
}
function find_window(d, min, ps){
let cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1]
while (cur <= min){
ptrs[d]++
cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1]
}
return ptrs[d]
}
let start = +new Date()
for (let i=2; i<n; i++){
let target = A[i-1] + 1
let max_len = find_index(target, ps)
let cur = ps[max_len] - ps[0]
let best = cur
for (let d=max_len - 1; d>1; d--){
let l = find_window(d, A[i-1], ps)
let cur = ps[l + d - 1] - ps[l - 1]
if (cur == target){
best = cur
break
}
if (cur > A[i-1] && cur < best)
best = cur
}
A[i] = best
ps[i + 1] = A[i] + ps[i]
}
console.log(A[n - 1])
console.log(`${ (new Date - start) / 1000 } seconds`)
Just for fun and reference, this prints the sequence and the possible index intervals corresponding to each element:
let A = [2, 3]
let n = 200
let is = [[-1], [-1]]
let ps = [A[0], A[0] + A[1]]
ps[-1] = 0
for (let i=2; i<n + 1; i++){
let prev = A[i-1]
let best = Infinity
let idxs
for (let j=0; j<i; j++){
for (let k=-1; k<j; k++){
let c = ps[j] - ps[k]
if (c > prev && c < best){
best = c
idxs = [[k+1,j]]
} else if (c == best)
idxs.push([k+1,j])
}
}
A[i] = best
is.push(idxs)
ps[i] = A[i] + ps[i-1]
}
let str = ''
A.map((x, i) => {
str += `${i}, ${x}, ${JSON.stringify(is[i])}\n`
})
console.log(str)
Looks like a sliding window problem to me.
#include <bits/stdc++.h>
using namespace std;
int main(int argc, char** argv) {
if(argc != 4) {
cout<<"Usage: "<<argv[0]<<" a0 a1 n"<<endl;
exit(-1);
}
int a0 = stoi(argv[1]);
int a1 = stoi(argv[2]);
int n = stoi(argv[3]);
int a[n]; // Create an array of length n
a[0] = a0; // Initialize first element
a[1] = a1; // Initialize second element
for(int i=2; i<n; i++) { // Build array up to nth element
int start = i-2; // Pointer to left edge of "window"
int end = i-1; // Pointer to right edge of "window"
int last = a[i-1]; // Last num calculated
int minSum = INT_MAX; // Var to hold min of sum found
int curSum = a[start] + a[end]; // Sum of all numbers in the window
while(start >= 0) { // Left edge is still inside array
// If current sum is greater than the last number calculated
// than it is a possible candidate for being next in sequence
if(curSum > last) {
if(curSum < minSum) {
// Found a smaller valid sum
minSum = curSum;
}
// Slide right edge of the window to the left
// from window to try to get a smaller sum.
// Decrement curSum by the value of removed element
curSum -= a[end];
end--;
}
else {
// Slide left edge of window to the left
start--;
if(!(start < 0)) {
// Increment curSum by the newly enclosed number
curSum += a[start];
}
}
}
// Add the min sum found to the end of the array.
a[i] = minSum;
}
// Print out the nth element of the array
cout<<a[n-1]<<endl;
return 0;
}
You have a number of stones with known weights w1, …, wn. Write a program that rearranges the stones into two piles such that the weight difference between the two piles is minimal.
I have a DP algorithm:
int max(int a, int b){
return a >= b ? a : b;
}
int diff(int* weights, int number_elements, int capacity){
int **ptrarray = new int* [capacity + 1];
for (int count = 0; count <= capacity; count++) {
ptrarray[count] = new int [number_elements + 1];
}
for (int i = 0; i <= number_elements; ++i){
ptrarray[0][i] = 0;
}
for (int i = 0; i <= capacity; ++i){
ptrarray[i][0] = 0;
}
for (int i = 1; i <= number_elements; ++i){
for (int j = 1; j <= capacity; ++j){
if(weights[i - 1] <= j){
ptrarray[j][i] = max(ptrarray[j][i - 1], ptrarray[j - weights[i - 1]][i-1] + weights[i - 1]);
} else{
ptrarray[j][i] = ptrarray[j][i - 1];
}
}
}
return ptrarray[capacity][number_elements];
}
int main(){
int capacity;
int number_elements;
cin >> number_elements;
int* weights = new int[number_elements];
int sum = 0;
int first_half;
for (int i = 0; i < number_elements; ++i){
cin >> weights[i];
sum+=weights[i];
}
first_half = sum / 2;
int after;
after = diff(weights, number_elements, first_half);
cout << sum - 2*after;
return 0;
}
But it's a little bit naive. It demands too much memory, and I need some hints to simplify it. Is there a more effective approach?
You can reduce the memory usage by making the following observations:
Your code uses only at most two layers of the ptrarray array at any time.
If you iterate from the largest to the smallest index within each layer, you can overwrite the previous layer in place. This way you'll need only one array.
Here is a pseudo code with this optimization:
max_weight = new int[max_capacity + 1](0)   // max_weight[c] = largest reachable sum <= c
for weight in weights:
    for capacity in [max_capacity ... weight]:   // iterate downwards
        max_weight[capacity] = max(max_weight[capacity], max_weight[capacity - weight] + weight)
It requires O(max_capacity) memory (instead of O(max_capacity * number_of_items)).
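For instance, a concrete C++ version of that single-array DP could look like this (minPileDiff is just an illustrative name):
#include <vector>
#include <algorithm>

// best[c] = largest achievable sum of a subset of weights that does not exceed c.
// Returns the minimal difference between the two piles.
int minPileDiff(const std::vector<int>& weights) {
    int sum = 0;
    for (int w : weights) sum += w;
    int half = sum / 2;
    std::vector<int> best(half + 1, 0);
    for (int w : weights)
        for (int c = half; c >= w; c--)               // iterate downwards so each weight is used once
            best[c] = std::max(best[c], best[c - w] + w);
    return sum - 2 * best[half];
}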
A couple more optimizations: you can use a boolean array (indicating whether the sum i is reachable) and pick the largest reachable sum at the end, instead of storing the largest sum less than or equal to i. Moreover, you can use an std::bitset instead of a boolean array to get an O(max_capacity * num_items / word_len) time complexity (where word_len is the number of bits the machine can process with one logical operation). Adding one weight would then look like reachable |= (reachable << weight).
So the final version looks like this:
reachable = bitset(max_capacity + 1)
reachable[0] = true
for weight in weights:
reachable |= reachable << weight
return highest set bit of reachable
The code becomes much simpler and more efficient this way (the time complexity is technically the same, but it's much faster in practice).
There's one caveat here: you need to know the size of std::bitset at compile time, so if it's not possible, you'll need a different bitset implementation.
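As an illustration, here is a sketch of the bitset variant, assuming a compile-time bound MAX_CAPACITY on the total weight (both the bound and the function name are placeholders, not a fixed API):
#include <bitset>
#include <vector>

const int MAX_CAPACITY = 100000;   // assumed compile-time bound on the total weight

int minPileDiffBitset(const std::vector<int>& weights) {
    int sum = 0;
    for (int w : weights) sum += w;    // assumes sum <= MAX_CAPACITY
    std::bitset<MAX_CAPACITY + 1> reachable;
    reachable[0] = true;
    for (int w : weights)
        reachable |= reachable << w;   // every reachable sum can also be extended by w
    int best = 0;
    for (int c = sum / 2; c >= 0; c--) // largest reachable sum not exceeding half the total
        if (reachable[c]) { best = c; break; }
    return sum - 2 * best;
}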
I ran the following code, which is an insertion sort that uses binary search (instead of linear search) to find the right position for the item being inserted, but two numbers in the output are not sorted correctly!
#include <iostream>
using namespace std;
void insertion_sort (int a[], int n /* the size of array */)
{
int i, temp,j;
for (i = 1; i < n; i++)
{
/* Assume items before a[i] are sorted. */
/* Pick an number */
temp = a[i];
/* Do binary search to find out the
point where b is to be inserted. */
int low = 0, high = i - 1, k;
while (high-low>1)
{
int mid = (high + low) / 2;
if (temp <= a[mid])
high = mid;
else
low = mid;
}
/* Shift items between high and i by 1 */
for (k = i; k > high; k--)
a[k] = a[k - 1];
a[high] = temp;
}
}
int main()
{
int A[15]={9,5,98,2,5,4,66,12,8,54,0,11,99,55,13};
insertion_sort(A,15);
for (int i=0; i<15; i++)
cout<<A[i]<<endl;
system("pause");
return 0;
}
Output: the printed array is not fully sorted; two of the numbers end up out of place.
Why?
#include <iostream>
using namespace std;
void insertion_sort (int a[], int n /* the size of array */)
{
int i, temp,j;
for (i = 1; i < n; i++)
{
/* Assume items before a[i] are sorted. */
/* Pick an number */
temp = a[i];
/* Do binary search to find out the
point where b is to be inserted. */
The upper bound should be one past the end of the searched range (i.e. exclusive), because you may need to insert at the end, i.e. shift nothing.
// int low = 0, high = i - 1, k;
int low = 0, high = i, k;
Here the condition should be low < high, not low + 1 < high
// while (high-low>1)
while (low < high)
{
int mid = (high + low) / 2;
if (temp <= a[mid])
high = mid;
else
Once a[mid] is strictly less than temp, the lowest possible position to insert is mid + 1.
// low = mid;
low = mid + 1;
}
/* Shift items between high and i by 1 */
for (k = i; k > high; k--)
a[k] = a[k - 1];
a[high] = temp;
}
}
int main()
{
int A[15]={9,5,98,2,5,4,66,12,8,54,0,11,99,55,13};
insertion_sort(A,15);
for (int i=0; i<15; i++)
cout<<A[i]<<endl;
system("pause");
return 0;
}
A few things to be noticed here:
Binary search does not give you anything, as you need to shift all elements to make space anyway. So it actually increases the overall cost of your algorithm (though not asymptotically).
As this is C++, there is no need to declare k before the for loop in which it is used (just use for (int k = ...; ...)).
Analyse your algorithm's beginning: in the first iteration i = 1, so low = high = 0 and the while loop does not execute. Then, no matter whether the element should be moved or not, your for (k) loop swaps elements 0 and 1. This is error number 1.
Second iteration (i = 2): the while loop again does not execute, as low = 0 and high = 1, and once again you unconditionally swap at least elements 1 and 2. Error number 2.
Now notice that every following iteration will, no matter what, move the element that was initially at index 0 (in your test code it is 9) further and further toward the last index.
So just by checking the first two iterations of the for (i) loop you can see that the assumption that the elements before a[i] are sorted is broken, and therefore the algorithm is wrong as well.
Easiest possible fix: initialize low and high as int low = -1, high = i;. What you wanted to do was to find indices low and high such that all elements from 0 to low are < a[i] and all elements from high to i-1 are ≥ a[i]. Your initialization didn't work since it didn't capture the corner cases when all elements a[0], ..., a[i-1] are greater than a[i] and the corner case when all these elements were less than a[i].
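For completeness, here is a sketch of the sort with that initialization applied (only low and high change; the rest of the loop is as in the question):
void insertion_sort (int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int temp = a[i];
        int low = -1, high = i;          /* invariant: a[0..low] < temp, a[high..i-1] >= temp */
        while (high - low > 1)
        {
            int mid = (high + low) / 2;
            if (temp <= a[mid])
                high = mid;
            else
                low = mid;
        }
        for (int k = i; k > high; k--)   /* shift to make room at position high */
            a[k] = a[k - 1];
        a[high] = temp;
    }
}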