I have worked out an O(n²) solution to the problem, and I was wondering about a better one. (This is not a homework/interview problem but something I do out of my own interest, hence sharing it here.)
If a=1, b=2, c=3, ..., z=26, then given a string of digits, find all possible codes that string
can generate. Example: "1123" should give:
aabc // a = 1, a = 1, b = 2, c = 3
kbc  // since k is 11, b = 2, c = 3
alc  // a = 1, l = 12, c = 3
aaw  // a = 1, a = 1, w = 23
kw   // k = 11, w = 23
Here is my code for the problem:
void alpha(int* a, int sz, vector<vector<int>>& strings) {
    for (int i = sz - 1; i >= 0; i--) {
        if (i == sz - 1) {
            vector<int> t;
            t.push_back(a[i]);
            strings.push_back(t);
        } else {
            int k = strings.size();
            for (int j = 0; j < k; j++) {
                vector<int> t = strings[j];
                strings[j].insert(strings[j].begin(), a[i]);
                if (t[0] < 10) {
                    int n = a[i] * 10 + t[0];
                    if (n <= 26) {
                        t[0] = n;
                        strings.push_back(t);
                    }
                }
            }
        }
    }
}
Essentially the vector strings will hold the sets of numbers.
This would run in O(n²). I am trying to wrap my head around at least an O(n log n) solution.
Intuitively a tree should help here, but I am not getting anywhere past that.
Generally, your problem complexity is more like 2^n, not n^2, since your k can increase with every iteration.
This is an alternative recursive solution (note: recursion is bad for very long codes). I didn't focus on optimization, since I'm not up to date with modern C++ (C++11 and later), but I think the recursive solution could be optimized with some moves.
Recursion also makes the complexity a bit more obvious compared to the iterative solution.
#include <deque>
#include <vector>

// Add the front element to each trailing code sequence. Create a new sequence if none exists.
void update_helper(int front, std::vector<std::deque<int>>& intermediate)
{
    if (intermediate.empty())
    {
        intermediate.push_back(std::deque<int>());
    }
    for (size_t i = 0; i < intermediate.size(); i++)
    {
        intermediate[i].push_front(front);
    }
}

std::vector<std::deque<int>> decode(int digits[], int count)
{
    if (count <= 0)
    {
        return std::vector<std::deque<int>>();
    }
    // Decode the tail, treating the first digit as a one-digit code.
    std::vector<std::deque<int>> result1 = decode(digits + 1, count - 1);
    update_helper(*digits, result1);
    // If the first two digits form a valid two-digit code, decode that branch too.
    if (count > 1 && (digits[0] * 10 + digits[1]) <= 26)
    {
        std::vector<std::deque<int>> result2 = decode(digits + 2, count - 2);
        update_helper(digits[0] * 10 + digits[1], result2);
        result1.insert(result1.end(), result2.begin(), result2.end());
    }
    return result1;
}
Call:
std::vector<std::deque<int>> strings = decode(codes, size);
Edit:
Regarding the complexity of the original code, I'll try to show what would happen in the worst case scenario, where the code sequence consists only of 1 and 2 values.
void alpha(int* a, int sz, vector<vector<int>>& strings)
{
    for (int i = sz - 1; i >= 0; i--)
    {
        if (i == sz - 1)
        {
            vector<int> t;
            t.push_back(a[i]);
            strings.push_back(t); // strings.size+1
        } // if summary: O(1), ignoring capacity change, strings.size+1
        else
        {
            int k = strings.size();
            for (int j = 0; j < k; j++)
            {
                vector<int> t = strings[j]; // O(strings[j].size) vector copy operation
                strings[j].insert(strings[j].begin(), a[i]); // strings[j].size+1
                // note: strings[j].insert treated as O(1) because other containers could do better than vector
                if (t[0] < 10)
                {
                    int n = a[i] * 10 + t[0];
                    if (n <= 26)
                    {
                        t[0] = n;
                        strings.push_back(t); // strings.size+1
                        // O(1), ignoring capacity change and copy operation
                    } // if summary: O(1), strings.size+1
                } // if summary: O(1), ignoring capacity change, strings.size+1
            } // for summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
        } // else summary: O(k * strings[j].size), strings.size+k, strings[j].size+1
    } // for summary: O(sum[i from 1 to sz] of (k * strings[j].size))
      // k (same as strings.size) doubles each iteration => k ends near 2^sz
      // strings[j].size increases by 1 each iteration
      // k * strings[j].size increases by an ever larger amount each iteration (it's getting huge)
}
Maybe I made a mistake somewhere, and if we want to play nice we can treat a vector copy as O(1) instead of O(n) to reduce the complexity. But the hard fact remains that the worst case doubles the outer vector's size in each iteration of the inner loop (at least every second iteration, considering the exact structure of the if conditions), and the inner loop depends on that growing vector size, which makes the whole story at least O(2^n).
Edit2:
I figured out the result complexity (the best hypothetical algorithm still needs to create every element of the result, so the result complexity is a lower bound on what any algorithm can achieve).
It actually follows the Fibonacci numbers:
For worst-case input (like only 1s):
an input of size N has k(N) results
an input of size N+1 has k(N+1) results
an input of size N+2 combines the codes starting with a, followed by the results for size N+1 (a consumes one element of the source), with the codes starting with k, followed by the results for size N (k consumes two elements of the source)
so an input of size N+2 has k(N) + k(N+1) results
Starting values: size 1 => 1 (a) and size 2 => 2 (aa or k)
Result: still exponential growth ;)
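A minimal sketch of that count (my own illustration, not part of the original code): for an all-1s input of length N, the number of decodings follows k(N) = k(N-1) + k(N-2), so it can be computed in O(N) without materializing any result.

#include <cstdint>
#include <iostream>

int main() {
    const int N = 20;                   // length of the all-1s input
    std::uint64_t kPrev = 1, kCur = 2;  // k(1) = 1 ("a"), k(2) = 2 ("aa", "k")
    for (int i = 3; i <= N; i++) {
        std::uint64_t kNext = kPrev + kCur; // k(i) = k(i-1) + k(i-2)
        kPrev = kCur;
        kCur = kNext;
    }
    std::cout << kCur << std::endl;     // 10946 results for N = 20
    return 0;
}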
Edit3:
Worked out a dynamic programming solution, somewhat similar to your approach with reverse iteration over the code array, and kind of optimized in its vector usage, based on the properties explained in Edit2.
The inner loop (update_helper) is still dominated by the count of results (worst case Fibonacci), and a few outer loop iterations will have a decent count of sub-results, but at least the sub-results are reduced to a pointer to some intermediate node, so copying should be pretty efficient. As a little bonus, I switched the result from numbers to characters.
Another edit: updated the code with the range 0-25 as 'a'-'z' and fixed some errors that led to wrong results.
#include <iostream>
#include <set>
#include <vector>

struct const_node
{
    const_node(char content, const_node* next)
        : next(next), content(content)
    {
    }

    const_node* const next;
    const char content;
};
// put front in front of each existing sub-result
void update_helper(int front, std::vector<const_node*>& intermediate)
{
    for (size_t i = 0; i < intermediate.size(); i++)
    {
        intermediate[i] = new const_node(front + 'a', intermediate[i]);
    }
    if (intermediate.empty())
    {
        intermediate.push_back(new const_node(front + 'a', NULL));
    }
}
std::vector<const_node*> decode_it(int digits[], size_t count)
{
    int current = 0;
    std::vector<const_node*> intermediates[3];
    for (size_t i = 0; i < count; i++)
    {
        current = (current + 1) % 3;
        int prev = (current + 2) % 3;     // -1
        int prevprev = (current + 1) % 3; // -2
        size_t index = count - i - 1;     // invert direction
        // copy from prev
        intermediates[current] = intermediates[prev];
        // update current (part 1)
        update_helper(digits[index], intermediates[current]);
        if (index + 1 < count && digits[index] &&
            digits[index] * 10 + digits[index + 1] < 26)
        {
            // update prevprev
            update_helper(digits[index] * 10 + digits[index + 1], intermediates[prevprev]);
            // add to current (part 2)
            intermediates[current].insert(intermediates[current].end(),
                intermediates[prevprev].begin(), intermediates[prevprev].end());
        }
    }
    return intermediates[current];
}
void cleanupDelete(std::vector<const_node*>& nodes);

int main()
{
    int code[] = { 1, 2, 3, 1, 2, 3, 1, 2, 3 };
    int size = sizeof(code) / sizeof(int);
    std::vector<const_node*> result = decode_it(code, size);
    // output
    for (size_t i = 0; i < result.size(); i++)
    {
        std::cout.width(3);
        std::cout.flags(std::ios::right);
        std::cout << i << ": ";
        const_node* item = result[i];
        while (item)
        {
            std::cout << item->content;
            item = item->next;
        }
        std::cout << std::endl;
    }
    cleanupDelete(result);
}
void fillCleanup(const_node* n, std::set<const_node*>& all_nodes)
{
    if (n)
    {
        all_nodes.insert(n);
        fillCleanup(n->next, all_nodes);
    }
}

void cleanupDelete(std::vector<const_node*>& nodes)
{
    // this is like multiple inverse trees, hard to delete correctly,
    // since multiple next pointers refer to the same target
    std::set<const_node*> all_nodes;
    for (auto var : nodes)
    {
        fillCleanup(var, all_nodes);
    }
    nodes.clear();
    for (auto var : all_nodes)
    {
        delete var;
    }
    all_nodes.clear();
}
A drawback of the dynamically reused structure is the cleanup, since you want to be careful to delete each node only once.
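A hedged alternative (my own suggestion, not part of the answer above; the name shared_node is illustrative): holding the tails through std::shared_ptr makes the shared suffixes clean themselves up when the result vector is dropped, at the cost of reference-counting overhead.

#include <memory>
#include <vector>

struct shared_node
{
    shared_node(char content, std::shared_ptr<const shared_node> next)
        : next(std::move(next)), content(content)
    {
    }

    const std::shared_ptr<const shared_node> next;
    const char content;
};

// update_helper equivalent: prepending keeps the old tail alive via the
// reference count, so no manual cleanupDelete pass is needed.
void update_helper(int front, std::vector<std::shared_ptr<const shared_node>>& intermediate)
{
    for (auto& node : intermediate)
    {
        node = std::make_shared<const shared_node>(front + 'a', node);
    }
    if (intermediate.empty())
    {
        intermediate.push_back(std::make_shared<const shared_node>(front + 'a', nullptr));
    }
}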
Related
I'm trying to write a program whose input is an array of integers and its size. The code has to delete each element which is smaller than the element to its left. We want to find the number of times we can process the array this way until we can no longer delete any more elements.
The contents of the array after we return are unimportant - only the return value is of interest.
For example: given the array [10, 9, 7, 8, 6, 5, 3, 4, 2, 1], the function should return 2, because:
[10,9,7,8,6,5,3,4,2,1] → [10,8,4] → [10]
For example: given the array [1,2,3,4], the function should return 0, because
no element is larger than the element to its right.
In other words, each element deletes the element to its right if that element is smaller. We get a smaller array, then we repeat the operation until we reach an array in which no element can delete another. I want to count the number of steps performed.
int Mafia(int n, vector<int> input_array)
{
    int ptr = n;
    int last_ptr = n;
    int night_Count = 0;
    do
    {
        last_ptr = ptr;
        ptr = 1;
        for (int i = 1; i < last_ptr; i++)
        {
            if (input_array[i] >= input_array[i - 1])
            {
                input_array[ptr++] = input_array[i];
            }
        }
        night_Count++;
    } while (last_ptr > ptr);
    return night_Count - 1;
}
My code works, but I want it to be faster.
Do you have any ideas to make it faster, or is there another approach that is faster than this?
Here is an O(NlogN) solution.
The idea is to iterate over the array and keep track of candidateKillers which could kill unvisited numbers. Then we find the killer for the current number using binary search and update the maximum iterations if needed.
Since we iterate over an array of N numbers and apply a log(N) binary search for each number, the overall time complexity is O(NlogN).
Algorithm
If the current number is greater than or equal to the number before it, it could be a killer for numbers after it.
For each killer, we keep track of its index idx, its value num, and the number of iterations needed to reach it, iters.
The numbers in candidateKillers are by nature non-increasing (see the next point). Therefore we can apply binary search to find the killer of the current number, which is the one that is a) the closest to the current number and b) greater than the current number. This is implemented in searchKiller.
If the current number will be killed by a number in candidateKillers at killerPos, then all candidate killers after killerPos are outdated, because those killers will themselves be killed before the numbers after the current number reach them. If the current number is greater than all candidateKillers, then all the candidateKillers can be discarded.
When we find the killer of the current number, we increase the killer's iters by one, because from now on one more iteration is needed to reach that killer: the current number needs to be killed first.
#include <bits/stdc++.h>
using namespace std;

class Solution {
public:
    int countIterations(vector<int>& array) {
        if (array.size() <= 1) {
            return 0;
        }
        int ans = 0;
        vector<Killer> candidateKillers = {Killer(0, array[0], 1)};
        for (size_t i = 1; i < array.size(); i++) {
            int curNum = array[i];
            int killerPos = searchKiller(candidateKillers, curNum);
            if (killerPos == -1) {
                // current one is the largest so far and all candidateKillers before are outdated
                candidateKillers = {Killer(i, curNum, 1)};
                continue;
            }
            // get rid of outdated killers
            int popCount = candidateKillers.size() - 1 - killerPos;
            for (int j = 0; j < popCount; j++) {
                candidateKillers.pop_back();
            }
            Killer killer = candidateKillers[killerPos];
            ans = max(killer.iters, ans);
            if (curNum < array[i-1]) {
                // since the killer of the current one may not even be in the list e.g., if current is 4 in [6,5,4]
                if (killer.idx == i - 1) {
                    candidateKillers[killerPos].iters += 1;
                }
            } else {
                candidateKillers[killerPos].iters += 1;
                candidateKillers.push_back(Killer(i, curNum, 1));
            }
        }
        return ans;
    }

private:
    struct Killer {
        Killer(int idx, int num, int iters)
            : idx(idx), num(num), iters(iters) {};

        int idx;
        int num;
        int iters;
    };

    int searchKiller(vector<Killer>& candidateKillers, int n) {
        int lo = 0;
        int hi = candidateKillers.size() - 1;
        if (candidateKillers[0].num < n) {
            return -1;
        }
        int ans = -1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (candidateKillers[mid].num > n) {
                ans = mid;
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return ans;
    }
};
int main() {
    vector<int> array1 = {10, 9, 7, 8, 6, 5, 3, 4, 2, 1};
    vector<int> array2 = {1, 2, 3, 4};
    vector<int> array3 = {4, 2, 1, 2, 3, 3};
    cout << Solution().countIterations(array1) << endl;  // 2
    cout << Solution().countIterations(array2) << endl;  // 0
    cout << Solution().countIterations(array3) << endl;  // 4
}
You can iterate in reverse, keeping two iterators or indices and moving elements in place; you don't need to allocate a new vector or even resize the existing one. Also, as a minor point, you can replace recursion with a loop, or write the code the way the compiler would likely transform it.
This approach is still O(n^2) in the worst case, but it would be faster in practice.
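A minimal sketch of the in-place two-index idea (my own reading of this suggestion; the function name is illustrative): one deletion round is done with a read index and a write index, so no new array is allocated.

#include <vector>

// Performs one deletion round in place: keeps each element only if it is
// not smaller than its left neighbour as the array stood at the start of
// the round. Returns the new logical size.
int compactOnce(std::vector<int>& a, int size) {
    int write = 1;                      // a[0] always survives
    for (int read = 1; read < size; read++) {
        if (a[read] >= a[read - 1]) {   // not killed by its left neighbour
            a[write++] = a[read];
        }
    }
    return write;
}

This mirrors the compaction the asker's inner loop already performs; the point is that each round reuses the same buffer.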
Given the heights of n towers and a value k, we need to either increase or decrease the height of every tower by k (only once), where k > 0. The task is to minimize the difference between the heights of the tallest and the shortest tower after the modifications, and to output this difference.
I get the intuition behind the solution, but I cannot comment on the correctness of the solution below.
// C++ program to find the minimum possible
// difference between maximum and minimum
// elements when we have to add/subtract
// every number by k
#include <bits/stdc++.h>
using namespace std;

// Modifies the array by subtracting/adding
// k to every element such that the difference
// between maximum and minimum is minimized
int getMinDiff(int arr[], int n, int k)
{
    if (n == 1)
        return 0;

    // Sort all elements
    sort(arr, arr+n);

    // Initialize result
    int ans = arr[n-1] - arr[0];

    // Handle corner elements
    int small = arr[0] + k;
    int big = arr[n-1] - k;
    if (small > big)
        swap(small, big);

    // Traverse middle elements
    for (int i = 1; i < n-1; i++)
    {
        int subtract = arr[i] - k;
        int add = arr[i] + k;

        // If both subtraction and addition
        // do not change diff
        if (subtract >= small || add <= big)
            continue;

        // Either subtraction causes a smaller
        // number or addition causes a greater
        // number. Update small or big using
        // greedy approach (If big - subtract
        // causes smaller diff, update small
        // Else update big)
        if (big - subtract <= add - small)
            small = subtract;
        else
            big = add;
    }
    return min(ans, big - small);
}

// Driver function to test the above function
int main()
{
    int arr[] = {4, 6};
    int n = sizeof(arr)/sizeof(arr[0]);
    int k = 10;
    cout << "\nMaximum difference is "
         << getMinDiff(arr, n, k);
    return 0;
}
Can anyone help me provide the correct solution to this problem?
The code above works; however, I didn't find much explanation, so I'll try to add some to help develop intuition.
For any given tower, you have two choices, you can either increase its height or decrease it.
Now if you decide to increase its height from say Hi to Hi + K, then you can also increase the height of all shorter towers as that won't affect the maximum. Similarly, if you decide to decrease the height of a tower from Hi to Hi − K, then you can also decrease the heights of all taller towers.
We will make use of this: we have n buildings, and we'll try to make each building the tallest, then see which choice gives us the least range of heights (which is our answer). Let me explain.
So what we want to do is: 1) We first sort the array (you will soon see why).
2) Then for every building from i = 0 to n-2[1], we try to make it the tallest (by adding K to that building and to the buildings on its left, and subtracting K from the buildings on its right).
So say we're at building Hi. We've added K to it and the buildings before it, and subtracted K from the buildings after it. The minimum height of the buildings will now be min(H0 + K, Hi+1 - K), i.e. min(first building + K, next building on the right - K).
(Note: This is because we sorted the array. Convince yourself by taking a few examples.)
Likewise, the maximum height of the buildings will be max(Hi + K, Hn-1 - K), i.e. max(current building + K, last building on the right - K).
3) max - min gives you the range.
[1] Note the case when i = n-1: there is no building after the current one, so we're adding K to every building, and the range is simply
height[n-1] - height[0], since K is added to everything and cancels out.
Here's a Java implementation based on the idea above:
import java.util.Arrays;

class Solution {
    int getMinDiff(int[] arr, int n, int k) {
        Arrays.sort(arr);
        int ans = arr[n-1] - arr[0];
        int smallest = arr[0] + k, largest = arr[n-1] - k;
        for (int i = 0; i < n-1; i++) {
            int min = Math.min(smallest, arr[i+1] - k);
            int max = Math.max(largest, arr[i] + k);
            if (min < 0) continue;
            ans = Math.min(ans, max - min);
        }
        return ans;
    }
}
int getMinDiff(int a[], int n, int k) {
    sort(a, a+n);
    int i, mx, mn, ans;
    ans = a[n-1] - a[0]; // this can be one possible solution
    for (i = 1; i < n; i++) // start at 1 so that a[i-1] below stays in bounds
    {
        if (a[i] >= k) // since the height of a tower can't be negative, take only +ve heights
        {
            mn = min(a[0] + k, a[i] - k);
            mx = max(a[n-1] - k, a[i-1] + k);
            ans = min(ans, mx - mn);
        }
    }
    return ans;
}
This is C++ code; it passed all the test cases.
This Python code might be of some help to you; it is self-explanatory.
def getMinDiff(arr, n, k):
    arr = sorted(arr)
    ans = arr[-1] - arr[0]  # this case occurs when we either subtract k from, or add k to, all elements of the array
    for i in range(n):
        # After sorting, arr[0] is the minimum, so adding k pushes it towards the maximum.
        # We subtract k from arr[i] to get any other, worse (smaller) minimum;
        # worse means increasing the diff between mn and mx.
        mn = min(arr[0] + k, arr[i] - k)
        # After sorting, arr[n-1] is the maximum, so subtracting k pushes it towards the minimum.
        # We add k to arr[i] to get any other, worse (bigger) maximum.
        mx = max(arr[n-1] - k, arr[i] + k)
        ans = min(ans, mx - mn)
    return ans
Here's a solution.
But before jumping to the solution, here's some info that is required to understand it. In the best-case scenario, the minimum difference would be zero. This can happen only in two cases: (1) the array contains duplicates, or (2) for some element, let's say x, there exists another element in the array with the value x + 2*k.
The idea is pretty simple.
First we sort the array.
Next, we try to find either the optimum value (for which the answer would come out to be zero) or at least the number closest to the optimum value, using binary search.
Here's a JavaScript implementation of the algorithm:
function minDiffTower(arr, k) {
    arr = arr.sort((a, b) => a - b);
    let minDiff = Infinity;
    let prev = null;
    for (let i = 0; i < arr.length; i++) {
        let el = arr[i];
        // Handle the case when the array has duplicates
        if (el == prev) {
            minDiff = 0;
            break;
        }
        prev = el;
        // Say we have an element 10. The difference would be zero when there
        // exists an element with value 10 + 2*k (this is the 'optimum value'
        // discussed in the explanation).
        let targetNum = el + 2*k;
        // It's not necessary that 'targetNum' exists in the array, so we try
        // to find the number closest to it using binary search.
        let closestMatchDiff = Infinity;
        let lb = i + 1;
        let ub = arr.length - 1;
        while (lb <= ub) {
            let mid = lb + ((ub - lb) >> 1);
            let currMidDiff = arr[mid] > targetNum ? arr[mid] - targetNum : targetNum - arr[mid];
            closestMatchDiff = Math.min(closestMatchDiff, currMidDiff);
            if (arr[mid] == targetNum) break; // in this case the answer is simply zero, no need to proceed further
            else if (arr[mid] < targetNum) lb = mid + 1;
            else ub = mid - 1;
        }
        minDiff = Math.min(minDiff, closestMatchDiff);
    }
    return minDiff;
}
Here is the C++ code; I have continued from where you left off. The code is self-explanatory.
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int minDiff(int arr[], int n, int k)
{
    // If the array has only one element.
    if (n == 1)
    {
        return 0;
    }

    // sort all elements
    sort(arr, arr + n);

    // initialise result
    int ans = arr[n - 1] - arr[0];

    // Handle corner elements
    int small = arr[0] + k;
    int big = arr[n - 1] - k;
    if (small > big)
    {
        // Swap the elements to keep the array sorted.
        int temp = small;
        small = big;
        big = temp;
    }

    // traverse middle elements
    for (int i = 0; i < n - 1; i++)
    {
        int subtract = arr[i] - k;
        int add = arr[i] + k;
        // If both subtraction and addition do not change the diff:
        // subtraction does not give a new minimum,
        // addition does not give a new maximum.
        if (subtract >= small or add <= big)
        {
            continue;
        }
        // Either subtraction causes a smaller number or addition causes a greater number.
        // Update small or big using a greedy approach:
        // if big - subtract causes a smaller diff, update small; else update big.
        if (big - subtract <= add - small)
        {
            small = subtract;
        }
        else
        {
            big = add;
        }
    }
    return min(ans, big - small);
}

int main(void)
{
    int arr[] = {1, 5, 15, 10};
    int n = sizeof(arr) / sizeof(arr[0]);
    int k = 3;
    cout << "\nMinimum difference is: " << minDiff(arr, n, k) << endl;
    return 0;
}
class Solution {
public:
    int getMinDiff(int arr[], int n, int k) {
        sort(arr, arr+n);
        int diff = arr[n-1] - arr[0];
        int mine, maxe;
        for (int i = 0; i < n; i++)
            arr[i] += k;
        mine = arr[0];
        maxe = arr[n-1] - 2*k;
        for (int i = n-1; i > 0; i--) {
            if (arr[i] - 2*k < 0)
                break;
            mine = min(mine, arr[i] - 2*k);
            maxe = max(arr[i-1], arr[n-1] - 2*k);
            diff = min(diff, maxe - mine);
        }
        return diff;
    }
};
class Solution:
    def getMinDiff(self, arr, n, k):
        arr.sort()
        res = arr[-1] - arr[0]
        for i in range(1, n):
            if arr[i] >= k:
                # At a time we can increase or decrease one number only.
                # Hence, assuming we decrease the ith element, we will increase the (i-1)th element.
                # Using this we find the new possible minimum and maximum,
                # and if their difference is smaller than res, we keep it.
                new_min = min(arr[0] + k, arr[i] - k)
                new_max = max(arr[-1] - k, arr[i-1] + k)
                res = min(res, new_max - new_min)
        return res
I have a progression a, where the first two numbers are given (a1 and a2) and every next number is the smallest subarray sum that is bigger than the previous number.
For example, if a1 = 2 and a2 = 3, the progression will be
2, 3, 5(=2+3), 8(=3+5), 10(=2+3+5), 13(=5+8), 16(=3+5+8),
18(=2+3+5+8=8+10), 23(=5+8+10=10+13), 26(=3+5+8+10), 28(=2+3+5+8+10), 29(=13+16)...
I need to find the Nth number in this progression. (The time limit is 0.7 seconds.)
(a1 is smaller than a2, a2 is smaller than 1000, and N is smaller than 100000.)
I tried priority queue, set, map, https://www.geeksforgeeks.org/find-subarray-with-given-sum/ and some other things.
I thought that the priority queue would work, but it exceeds the memory limit (256 MB), so I am pretty much hopeless.
Here's what performs best at the moment.
#include <functional>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main(){
    int a1, a2, n;
    cin >> a1 >> a2 >> n;
    priority_queue< int, vector<int>, greater<int> > pq;
    pq.push(a1 + a2);
    int a[n+1]; // contains the sums of the progression
    a[0] = 0;
    a[1] = a1;
    a[2] = a1 + a2;
    for (int i = 3; i <= n; i++){
        while (pq.top() <= a[i-1] - a[i-2])
            pq.pop();
        a[i] = pq.top() + a[i-1];
        pq.pop();
        for (int j = 1; j < i && a[i] - a[j-1] > a[i] - a[i-1]; j++)
            pq.push(a[i] - a[j-1]);
    }
    cout << a[n] - a[n-1];
}
I've been trying to solve this for the last 4 days without any success.
Sorry for the bad English; I am only 14 and not from an English-speaking country.
SOLUTION (Big thanks to n.m. and גלעד ברקן)
V1 (n.m.'s solution)
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

struct sliding_window{
    int start_pos;
    int end_pos;
    int sum;
    sliding_window(int new_start_pos, int new_end_pos, int new_sum){
        start_pos = new_start_pos;
        end_pos = new_end_pos;
        sum = new_sum;
    }
};

class Compare{
public:
    bool operator() (sliding_window &lhs, sliding_window &rhs){
        return (lhs.sum > rhs.sum);
    }
};
int main(){
    int a1, a2, n;
    //input
    cin >> a1 >> a2 >> n;
    int a[n+1];
    a[0] = a1;
    a[1] = a2;
    queue<sliding_window> leftOut;
    priority_queue< sliding_window, vector<sliding_window>, Compare> pq;
    //add the first two sliding window positions that will expand with time
    pq.push(sliding_window(0, 0, a1));
    pq.push(sliding_window(1, 1, a2));
    for (int i = 2; i < n; i++){
        int target = a[i-1] + 1;
        //expand the sliding window with the smallest sum
        while (pq.top().sum < target){
            sliding_window temp = pq.top();
            pq.pop();
            //if the window can't be expanded, it is added to the leftOut queue
            if (temp.end_pos + 1 < i){
                temp.end_pos++;
                temp.sum += a[temp.end_pos];
                pq.push(temp);
            } else {
                leftOut.push(temp);
            }
        }
        a[i] = pq.top().sum;
        //add the removed sliding windows and the new sliding window to the queue
        pq.push(sliding_window(i, i, a[i]));
        while (leftOut.empty() == false){
            pq.push(leftOut.front());
            leftOut.pop();
        }
    }
    //print out the result
    cout << a[n-1];
}
V2 (גלעד ברקן's solution)
#include <stdio.h>

int find_index(int target, int ps[], int ptrs[], int n){
    int cur = ps[ptrs[n]] - ps[0];
    while (cur < target){
        ptrs[n]++;
        cur = ps[ptrs[n]] - ps[0];
    }
    return ptrs[n];
}

int find_window(int d, int min, int ps[], int ptrs[]){
    int cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1];
    while (cur <= min){
        ptrs[d]++;
        cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1];
    }
    return ptrs[d];
}

int main(void){
    int a1, a2, n, i;
    int args = scanf("%d %d %d", &a1, &a2, &n);
    if (args != 3)
        printf("Failed to read input.\n");
    int a[n];
    a[0] = a1;
    a[1] = a2;
    int ps[n+1];
    ps[0] = 0;
    ps[1] = a[0];
    ps[2] = a[0] + a[1];
    for (i = 3; i < n+1; i++)
        ps[i] = 1000000;
    int ptrs[n+1];
    for (i = 0; i < n+1; i++)
        ptrs[i] = 1;
    for (i = 2; i < n; i++){
        int target = a[i-1] + 1;
        int max_len = find_index(target, ps, ptrs, n);
        int cur = ps[max_len] - ps[0];
        int best = cur;
        for (int d = max_len - 1; d > 1; d--){
            int l = find_window(d, a[i-1], ps, ptrs);
            int cur = ps[l + d - 1] - ps[l - 1];
            if (cur == target){
                best = cur;
                break;
            }
            if (cur > a[i-1] && cur < best)
                best = cur;
        }
        a[i] = best;
        ps[i+1] = a[i] + ps[i];
    }
    printf("%d", a[n-1]);
}
Your priority queue is too big; you can get away with a much smaller one.
Have a priority queue of subarrays represented, e.g., by triples (lowerIndex, upperIndex, sum), keyed by the sum. Given array A of size N, for each index i from 0 to N-2, there is exactly one subarray in the queue with lowerIndex == i. Its sum is the minimal possible sum greater than the last element.
At each step of the algorithm:
Add the sum from the first element of the queue as the new element of A.
Update the first queue element (and all others with the same sum) by extending its upperIndex and updating sum, so it's greater than the new last element.
Add a new subarray of two elements with indices (N-2, N-1) to the queue.
The complexity is a bit hard to analyse because of the duplicate sums in point 2 above, but I guess there shouldn't be too many of those.
It might be enough to try each relevant subarray length to find the next element. If we binary search on each length for the optimal window, we can have an O(n * log(n) * sqrt(n)) solution.
But we can do better by observing that each subarray length has a lower-bound index that steadily increases as n does. If we keep a pointer to the lowest index for each subarray length and simply iterate upwards each time, each pointer is guaranteed to increase at most n times. Since there are O(sqrt n) pointers, we have O(n * sqrt n) total iterations.
A rough draft of the pointer idea follows.
UPDATE
For an actual submission, the find_index function was converted to another increasing pointer for speed. (Submission here, username "turnerware"; C code here.)
let n = 100000
let A = new Array(n)
A[0] = 2
A[1] = 3
let ps = new Array(n + 1)
ps[0] = 0
ps[1] = A[0]
ps[2] = A[0] + A[1]
let ptrs = new Array(n + 1).fill(1)

function find_index(target, ps){
  let low = 0
  let high = ps.length
  while (low != high){
    let mid = (high + low) >> 1
    let cur = ps[mid] - ps[0]
    if (cur <= target)
      low = mid + 1
    else
      high = mid
  }
  return low
}

function find_window(d, min, ps){
  let cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1]
  while (cur <= min){
    ptrs[d]++
    cur = ps[ptrs[d] + d - 1] - ps[ptrs[d] - 1]
  }
  return ptrs[d]
}

let start = +new Date()

for (let i=2; i<n; i++){
  let target = A[i-1] + 1
  let max_len = find_index(target, ps)
  let cur = ps[max_len] - ps[0]
  let best = cur
  for (let d=max_len - 1; d>1; d--){
    let l = find_window(d, A[i-1], ps)
    let cur = ps[l + d - 1] - ps[l - 1]
    if (cur == target){
      best = cur
      break
    }
    if (cur > A[i-1] && cur < best)
      best = cur
  }
  A[i] = best
  ps[i + 1] = A[i] + ps[i]
}

console.log(A[n - 1])
console.log(`${ (new Date - start) / 1000 } seconds`)
Just for fun and reference, this prints the sequence and the possible indexed intervals corresponding to each element:
let A = [2, 3]
let n = 200
let is = [[-1], [-1]]
let ps = [A[0], A[0] + A[1]]
ps[-1] = 0

for (let i=2; i<n + 1; i++){
  let prev = A[i-1]
  let best = Infinity
  let idxs
  for (let j=0; j<i; j++){
    for (let k=-1; k<j; k++){
      let c = ps[j] - ps[k]
      if (c > prev && c < best){
        best = c
        idxs = [[k+1, j]]
      } else if (c == best)
        idxs.push([k+1, j])
    }
  }
  A[i] = best
  is.push(idxs)
  ps[i] = A[i] + ps[i-1]
}

let str = ''
A.map((x, i) => {
  str += `${i}, ${x}, ${JSON.stringify(is[i])}\n`
})
console.log(str)
Looks like a sliding window problem to me.
#include <bits/stdc++.h>
using namespace std;

int main(int argc, char** argv) {
    if (argc != 4) {
        cout << "Usage: " << argv[0] << " a0 a1 n" << endl;
        exit(-1);
    }
    int a0 = stoi(argv[1]);
    int a1 = stoi(argv[2]);
    int n  = stoi(argv[3]);

    int a[n];    // Create an array of length n
    a[0] = a0;   // Initialize first element
    a[1] = a1;   // Initialize second element

    for (int i = 2; i < n; i++) {        // Build array up to nth element
        int start = i-2;                 // Pointer to left edge of "window"
        int end = i-1;                   // Pointer to right edge of "window"
        int last = a[i-1];               // Last num calculated
        int minSum = INT_MAX;            // Var to hold min sum found
        int curSum = a[start] + a[end];  // Sum of all numbers in the window
        while (start >= 0) {             // Left edge is still inside array
            // If the current sum is greater than the last number calculated
            // then it is a possible candidate for being next in the sequence
            if (curSum > last) {
                if (curSum < minSum) {
                    // Found a smaller valid sum
                    minSum = curSum;
                }
                // Slide the right edge of the window to the left
                // to try to get a smaller sum.
                // Decrement curSum by the value of the removed element.
                curSum -= a[end];
                end--;
            }
            else {
                // Slide the left edge of the window to the left
                start--;
                if (!(start < 0)) {
                    // Increment curSum by the newly enclosed number
                    curSum += a[start];
                }
            }
        }
        // Add the min sum found to the end of the array.
        a[i] = minSum;
    }
    // Print out the nth element of the array
    cout << a[n-1] << endl;
    return 0;
}
I'm making a simple program to calculate the number of pairs in an array whose sum is divisible by 3; the array length and values are user-determined.
My code works fine; however, I want to check if there is a faster way to calculate the result, with less running time.
When the length of the array is 10^4 or less, the program takes under 100 ms. However, as it approaches 10^5 the time spikes up to 1000 ms. Why is this, and how can I improve the speed?
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int N, i, b;
    b = 0;
    cin >> N;
    unsigned int j = 0;
    std::vector<unsigned int> a(N);
    for (j = 0; j < N; j++) {
        cin >> a[j];
        if (j == 0) {
        }
        else {
            for (i = j - 1; i >= 0; i = i - 1) {
                if ((a[j] + a[i]) % 3 == 0) {
                    b++;
                }
            }
        }
    }
    cout << b;
    return 0;
}
Your algorithm has O(N^2) complexity. There is a faster way.
(a[i] + a[j]) % 3 == ((a[i] % 3) + (a[j] % 3)) % 3
Thus, you need not know the exact numbers; you only need to know their remainders after division by three. A zero remainder of the sum can be obtained from two numbers with zero remainders (0 + 0), or from two numbers with remainders 1 and 2 (1 + 2).
The result will be equal to r[1]*r[2] + r[0]*(r[0]-1)/2, where r[i] is the quantity of numbers with remainder equal to i.
int r[3] = {};          // r[i]: how many elements have remainder i
for (int i : a) {       // a is the input vector
    r[i % 3]++;
}
std::cout << r[1]*r[2] + (r[0]*(r[0]-1)) / 2;
The complexity of this algorithm is O(N).
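A self-contained check of the counting formula (my own example data, not from the answer above):

#include <iostream>
#include <vector>

int main() {
    // remainders: 1,2,0,1,2,0 -> r[0]=2, r[1]=2, r[2]=2
    std::vector<unsigned int> a = {1, 2, 3, 4, 5, 6};
    long long r[3] = {};
    for (unsigned int x : a) r[x % 3]++;
    // valid pairs: (1,2), (1,5), (2,4), (4,5), (3,6)
    std::cout << r[1]*r[2] + r[0]*(r[0]-1)/2 << "\n"; // prints 5
    return 0;
}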
I've encountered this problem before, and while I can't find my particular solution, you could improve running times by hashing.
The code would look something like this:
// A C++ program to check if arr[0..n-1] can be divided
// into pairs such that every pair sum is divisible by k.
#include <bits/stdc++.h>
using namespace std;

// Returns true if arr[0..n-1] can be divided into pairs
// with sum divisible by k.
bool canPairs(int arr[], int n, int k)
{
    // An odd-length array cannot be divided into pairs
    if (n & 1)
        return false;

    // Create a frequency map to count occurrences
    // of all remainders when divided by k.
    map<int, int> freq;

    // Count occurrences of all remainders
    for (int i = 0; i < n; i++)
        freq[arr[i] % k]++;

    // Traverse the input array and use freq[] to decide
    // if the given array can be divided into pairs
    for (int i = 0; i < n; i++)
    {
        // Remainder of current element
        int rem = arr[i] % k;

        // If the remainder of the current element divides
        // k into two halves.
        if (2*rem == k)
        {
            // Then there must be an even number of occurrences
            // of such a remainder
            if (freq[rem] % 2 != 0)
                return false;
        }
        // If the remainder is 0, then there must be an even
        // number of elements with 0 remainder
        else if (rem == 0)
        {
            if (freq[rem] & 1)
                return false;
        }
        // Else the number of occurrences of the remainder
        // must be equal to the number of occurrences of
        // k - remainder
        else if (freq[rem] != freq[k - rem])
            return false;
    }
    return true;
}

/* Driver program to test above function */
int main()
{
    int arr[] = {92, 75, 65, 48, 45, 35};
    int k = 10;
    int n = sizeof(arr)/sizeof(arr[0]);
    canPairs(arr, n, k) ? cout << "True" : cout << "False";
    return 0;
}
That works for any k (in your case, 3).
But then again, this is not my code; it's the code you can find at the following link, with a proper explanation. I didn't just paste the link, since I think that's bad practice.
I have done a test in C++ asking for a function that returns one of the indices that splits the input vector into two parts having the same sum of elements. For example, for vec = {1, 2, 3, 5, 4, -1, 1, 1, 2, -1}, it may return 3, because 1+2+3 = 6 = 4-1+1+1+2-1. So I wrote a function that returns the correct answer:
int func(const std::vector<int>& vecIn)
{
    for (std::size_t p = 0; p < vecIn.size(); p++)
    {
        if (std::accumulate(vecIn.begin(), vecIn.begin() + p, 0) ==
            std::accumulate(vecIn.begin() + p + 1, vecIn.end(), 0))
            return p;
    }
    return -1;
}
My problem was that when the input was a very long vector containing just 1s (or -1s), the function was slow. So I thought of starting the search for the wanted index from the middle, then going left and right. But the best approach, I suppose, is the one where the indices come in merge-sort order, that is: n/2, n/4, 3n/4, n/8, 3n/8, 5n/8, 7n/8... where n is the size of the vector. Is there a way to write this order as a formula, so I can apply it in my function?
Thanks
EDIT
After some comments, I have to mention that I did the test a few days ago, so I forgot to mention the no-solution case: the function should return -1. I have also updated the question title.
Specifically for this problem, I would use the following algorithm:
Compute the total sum of the vector. This gives two sums (for the empty vector and the full vector).
For each element in order, move one element from full to empty, which means moving the value of the next element from sum(full) to sum(empty). When the two sums are equal, you have found your index.
This gives an O(n) algorithm instead of O(n²).
You can solve the problem much faster without calling std::accumulate at each step:
int func(const std::vector<int>& vecIn)
{
    int s1 = 0;
    int s2 = std::accumulate(vecIn.begin(), vecIn.end(), 0);
    for (std::size_t p = 0; p < vecIn.size(); p++)
    {
        s2 -= vecIn[p]; // element p becomes the pivot, excluded from both sides
        if (s1 == s2)
            return p;
        s1 += vecIn[p];
    }
    return -1;
}
This is O(n). At each step, s1 contains the sum of the elements before p and s2 the sum of the elements after p; you can update both with an addition and a subtraction when moving to the next element.
Since std::accumulate needs to iterate over the range you give it, your algorithm was O(n^2), which is why it was so slow for many elements.
To answer the actual question: your sequence n/2, n/4, 3n/4, n/8, 3n/8 can be rewritten as
1*n/2
1*n/4 3*n/4
1*n/8 3*n/8 5*n/8 7*n/8
...
that is to say, the denominator runs through the powers of 2 (i = 2, 4, 8, ...), and the numerator runs from j = 1 to i-1 in steps of 2. However, this is not quite what you need for your actual problem, because your example has n = 10. Clearly you don't want n/4 there - your indices have to be integers.
The best solution here is to recurse. Given a range [b, e], pick the middle value (b+e)/2, visit it, and set the new ranges to [b, (b+e)/2 - 1] and [(b+e)/2 + 1, e]. Of course, specialize ranges of length 1 or 2.
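A minimal sketch of that enumeration order (my own illustration, not from the answer): processing the pending ranges through a queue yields the level order n/2, n/4, 3n/4, n/8, ... from the question, while staying on integer indices.

#include <iostream>
#include <queue>
#include <utility>

int main() {
    int n = 10; // size from the question's example
    std::queue<std::pair<int, int>> ranges; // pending [b, e] ranges, inclusive
    ranges.push({0, n - 1});
    while (!ranges.empty()) {
        auto [b, e] = ranges.front();
        ranges.pop();
        if (b > e) continue;          // empty range, nothing to visit
        int m = b + (e - b) / 2;      // integer middle index
        std::cout << m << ' ';        // visit the middle first
        ranges.push({b, m - 1});      // then queue both halves
        ranges.push({m + 1, e});
    }
    std::cout << '\n';                // prints: 4 1 7 0 2 5 8 3 6 9
    return 0;
}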
Considering MSalters' comments, I'm afraid another solution would be better. If you want to use less memory, maybe the selected answer is good enough, but to find the possibly multiple solutions you could use the following code:
static const int arr[] = {5, -10, 10, -10, 10, 1, 1, 1, 1, 1};
std::vector<int> vec(arr, arr + sizeof(arr) / sizeof(arr[0]));

// compute cumulative sum
std::vector<int> cumulative_sum(vec.size());
cumulative_sum[0] = vec[0];
for (size_t i = 1; i < vec.size(); i++)
{ cumulative_sum[i] = cumulative_sum[i-1] + vec[i]; }
const int complete_sum = cumulative_sum.back();

// find multiple solutions, if there are any
const int complete_sum_half = complete_sum / 2; // suggesting this is valid...
std::vector<int>::iterator it = cumulative_sum.begin();
std::vector<int> mid_indices;
do {
    it = std::find(it, cumulative_sum.end(), complete_sum_half);
    if (it != cumulative_sum.end())
    { mid_indices.push_back(it - cumulative_sum.begin()); ++it; }
} while (it != cumulative_sum.end());

for (size_t i = 0; i < mid_indices.size(); i++)
{ std::cout << mid_indices[i] << std::endl; }
std::cout << "Split behind these indices to obtain two equal halves." << std::endl;
This way, you get all the possible solutions. If there is no way to split the vector into two equal halves, mid_indices will be left empty.
Again, you have to sum up each value only once.
My proposal is this:
static const int arr[] = {1, 2, 3, 5, 4, -1, 1, 1, 2, -1};
std::vector<int> vec(arr, arr + sizeof(arr) / sizeof(arr[0]));

int idx1(0), idx2(vec.size() - 1);
int sum1(0), sum2(0);
int idxMid = -1;
do {
    // fast access without using the index each time.
    const int& val1 = vec[idx1];
    const int& val2 = vec[idx2];
    // Precompute the next (possible) sum values.
    const int nSum1 = sum1 + val1;
    const int nSum2 = sum2 + val2;
    // Move the index considering the balance between the
    // left and right sums.
    if (sum1 - nSum2 < sum2 - nSum1)
    { sum1 = nSum1; idx1++; }
    else
    { sum2 = nSum2; idx2--; }
    if (idx1 >= idx2) { idxMid = idx2; }
} while (idxMid < 0 && idx2 >= 0 && idx1 < vec.size());
std::cout << idxMid << std::endl;
It adds every value only once, no matter how many values there are, so its complexity is only O(n) and not O(n²).
The code simply runs from the left and the right simultaneously and advances the index whose side has the lower sum.
You want the nth term of the series you mentioned. It would be:
numerator: (n - 2^((int)(log2 n))) * 2 + 1
denominator: 2^((int)(log2 n) + 1)
(where (int)(log2 n) means floor(log2 n))
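A quick sketch checking that closed form against the first few terms (my own verification code; it assumes the term index n starts at 1 for the term 1/2):

#include <cmath>
#include <cstdio>

int main() {
    // expected: 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8
    for (int n = 1; n <= 7; n++) {
        int p = (int)std::log2(n);        // floor(log2 n)
        int num = (n - (1 << p)) * 2 + 1; // numerator
        int den = 1 << (p + 1);           // denominator
        std::printf("term %d: %d/%d\n", n, num, den);
    }
    return 0;
}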
I came across the same question in Codility tests. There is a similar-looking answer above (it didn't pass some of the unit tests), but the code segment below was successful in the tests.
#include <vector>
#include <numeric>
#include <iostream>
using namespace std;

// Returns -1 if the equilibrium point is not found.
// Uses long long to support bigger ranges.
int FindEquilibriumPoint(vector<long>& values) {
    long long lower = 0;
    long long upper = std::accumulate(values.begin(), values.end(), 0LL);
    for (std::size_t i = 0; i < values.size(); i++) {
        upper -= values[i];
        if (lower == upper) {
            return i;
        }
        lower += values[i];
    }
    return -1;
}

int main() {
    vector<long> v = {-1, 3, -4, 5, 1, -6, 2, 1};
    cout << "Equilibrium Point:" << FindEquilibriumPoint(v) << endl;
    return 0;
}
Output
Equilibrium Point:1
Here is the algorithm in JavaScript:
function equi(arr) {
    var N = arr.length;
    if (N == 0) { return -1; }
    var suma = 0;
    for (var i = 0; i < N; i++) {
        suma += arr[i];
    }
    var suma_iz = 0;
    for (i = 0; i < N; i++) {
        var suma_de = suma - suma_iz - arr[i];
        if (suma_iz == suma_de) {
            return i;
        }
        suma_iz += arr[i];
    }
    return -1;
}
As you can see, this code satisfies the O(n) condition.