I have been trying this for a long time, but my code for the following problem gives a wrong answer.
Problem Statement: You have three stacks of cylinders where each cylinder has the same diameter, but they may vary in height. You can change the height of a stack by removing and discarding its topmost cylinder any number of times.
Find the maximum possible height of the stacks such that all of the stacks are exactly the same height. This means you must remove zero or more cylinders from the top of zero or more of the three stacks until they're all the same height, then print the height. The removals must be performed in such a way as to maximize the height.
Sample Input
5 3 4
3 2 1 1 1
4 3 2
1 1 4 1
Sample Output
5
My algorithm for this is:
Step I. Read the 3 arrays and reverse them, then build a prefix-sum array from each one, where every element is the sum of all previous elements plus itself. E.g.: [3,2,1,1,1] -> [1,1,1,2,3] -> [1,2,3,5,8]
So the 3 new arrays formed would be [1,2,3,5,8], [2,5,9] and [1,5,6,7].
Step II. Take the smallest array, traverse it, and search for each element in the other 2 arrays - if the element exists in both of the other arrays, STOP there and return that number.
E.g. here I start with element 2, which does not exist in the other 2 arrays. Next I try element 5: it exists in both of the other arrays.
My code:
#include <iostream>
using namespace std;

int main()
{
    long long h1, h2, h3;
    cin >> h1 >> h2 >> h3;
    long long a[h1], b[h2], c[h3];
    long long sum1 = 0, sum2 = 0, sum3 = 0;
    for (long long i = 0; i < h1; i++) {
        cin >> a[h1 - i - 1];
    }
    for (long long i = 0; i < h2; i++) {
        cin >> b[h2 - i - 1];
    }
    for (long long i = 0; i < h3; i++) {
        cin >> c[h3 - i - 1];
    }
    for (long long i = 0; i < h1; i++) {
        sum1 = sum1 + a[i];
        a[i] = sum1;
    }
    for (long long i = 0; i < h2; i++) {
        sum2 = sum2 + b[i];
        b[i] = sum2;
    }
    for (long long i = 0; i < h3; i++) {
        sum3 = sum3 + c[i];
        c[i] = sum3;
    }
    long long i = 0, j = 0, k = 0;
    while (i < h1 && j < h2 && k < h3)
    {
        if (a[i] == b[j] && b[j] == c[k])
        { cout << a[i] << " "; return 0; }
        else if (a[i] < b[j])
            i++;
        else if (b[j] < c[k])
            j++;
        else
            k++;
    }
    cout << 0;
    return 0;
}
What modification should I make for the code to work for larger input values?
The code gives a wrong answer for some inputs.
Please help.
You should traverse your arrays in the while loop from high to low values, because we need the highest common value. Since the prefix-sum arrays are sorted in increasing order, the first match found from the top is the maximum possible height. Update your code to:
long long i = h1 - 1, j = h2 - 1, k = h3 - 1;
while (i > -1 && j > -1 && k > -1)
{
    if (a[i] == b[j] && b[j] == c[k])
    { cout << a[i] << " "; return 0; }
    else if (a[i] > b[j])
        i--;
    else if (b[j] > c[k])
        j--;
    else
        k--;
}
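For reference, the whole program with that fix spliced in might look like this (a sketch; it also swaps the non-standard variable-length arrays for std::vector):

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    long long h1, h2, h3;
    cin >> h1 >> h2 >> h3;
    vector<long long> a(h1), b(h2), c(h3);
    // read each stack top-first, store it bottom-first
    for (long long i = 0; i < h1; i++) cin >> a[h1 - i - 1];
    for (long long i = 0; i < h2; i++) cin >> b[h2 - i - 1];
    for (long long i = 0; i < h3; i++) cin >> c[h3 - i - 1];
    // prefix sums: a[i] = height of the stack built from the bottom i+1 cylinders
    for (long long i = 1; i < h1; i++) a[i] += a[i - 1];
    for (long long i = 1; i < h2; i++) b[i] += b[i - 1];
    for (long long i = 1; i < h3; i++) c[i] += c[i - 1];
    // scan from the top: the first common prefix sum is the maximum height
    long long i = h1 - 1, j = h2 - 1, k = h3 - 1;
    while (i >= 0 && j >= 0 && k >= 0) {
        if (a[i] == b[j] && b[j] == c[k]) { cout << a[i] << "\n"; return 0; }
        else if (a[i] > b[j]) i--;
        else if (b[j] > c[k]) j--;
        else k--;
    }
    cout << 0 << "\n";
    return 0;
}

On the sample input this prints 5.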
Given an array of N numbers (not necessarily sorted). We can merge any two numbers into one and the cost of merging the two numbers is equal to the sum of the two values. The task is to find the total minimum cost of merging all the numbers.
Example:
Let the array A = [1,2,3,4]
Then, we can remove 1 and 2, add them together, and put the sum back in the array. The cost of this step would be (1+2) = 3.
Now, A = [3,3,4], Cost = 3
In the second step, we can remove 3 and 3, add them together, and put the sum back in the array. The cost of this step would be (3+3) = 6.
Now, A = [4,6], Cost = 6
In the third step, we can remove both elements from the array and put the sum back in the array again. The cost of this step would be (4+6) = 10.
Now, A = [10], Cost = 10
So, the total cost turns out to be 19 (10+6+3).
We will have to pick the 2 smallest elements to minimize our total cost. A simple way to do this is using a min heap structure. We will be able to get the minimum element in O(1) and insertion will be O(log n).
The time complexity of this approach is O(n log n).
But I tried another approach, and wasn't able to find the cases where it fails. The basic idea is that the sum of the two smallest elements that we choose at any time will always be greater than or equal to the sum of the pair chosen before. So the "temp" array will always be sorted, and we will be able to access the minimum elements in O(1).
As I am sorting the input array and then simply traversing the array, the complexity of my approach is O(n log n).
int minCost(vector<int>& arr) {
    sort(arr.begin(), arr.end());
    // temp array will contain the sums of all the pairs of minimum elements
    vector<int> temp;
    // index for arr
    int i = 0;
    // index for temp
    int j = 0;
    int cost = 0;
    // while we have more than 1 element combined in both the input and temp array
    while(arr.size() - i + temp.size() - j > 1) {
        int num1, num2;
        // selecting num1 (minimum element)
        if(i < arr.size() && j < temp.size()) {
            if(arr[i] <= temp[j])
                num1 = arr[i++];
            else
                num1 = temp[j++];
        }
        else if(i < arr.size())
            num1 = arr[i++];
        else if(j < temp.size())
            num1 = temp[j++];
        // selecting num2 (second minimum element)
        if(i < arr.size() && j < temp.size()) {
            if(arr[i] <= temp[j])
                num2 = arr[i++];
            else
                num2 = temp[j++];
        }
        else if(i < arr.size())
            num2 = arr[i++];
        else if(j < temp.size())
            num2 = temp[j++];
        // appending the sum of the two minimum elements to the temp array
        int sum = num1 + num2;
        temp.push_back(sum);
        cost += sum;
    }
    return cost;
}
Is this approach correct? If not, please let me know what I am missing, and the test cases in which this algorithm fails.
SPOJ Link for the same problem
The logic seems very solid to me... the computed sums will never be decreasing, and therefore you only need to add up either the oldest two computed sums, the next two input elements, or the oldest sum and the next element.
I would just simplify the code:
#include <vector>
#include <algorithm>
#include <stdio.h>

int hsum(std::vector<int> arr) {
    int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
    std::sort(arr.begin(), arr.end());
    std::vector<int> temp;
    auto get = [&]()->int {
        if (j == nj || (i < ni && arr[i] < temp[j])) return arr[i++];
        return temp[j++];
    };
    while ((ni-i)+(nj-j)>1) {
        int a = get(), b = get();
        res += a+b;
        temp.push_back(a + b); nj++;
    }
    return res;
}

int main() {
    fprintf(stderr, "%i\n", hsum(std::vector<int>{1,4,2,3}));
    return 0;
}
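(For the example input {1,4,2,3}, this prints 19, the same total as the worked example above.)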
Very nice idea!
Another improvement is noting that the cumulative length of the two arrays being processed (the original one and the temporary one holding the sums) will decrease at every step.
Since the first step will use two input elements, the fact that the temporary array grows by only one element at each step means that a "walking queue" allocated in the array itself can never catch up with the reading pointer.
This means that there is no need of a temporary array and the space for the sums can be found in the array itself...
int hsum(std::vector<int> arr) {
    int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
    std::sort(arr.begin(), arr.end());
    auto get = [&]()->int {
        if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
        return arr[j++];
    };
    while ((ni-i)+(nj-j)>1) {
        int a = get(), b = get();
        res += a+b;
        arr[nj++] = a + b;
    }
    return res;
}
About the error on SPOJ... I tried briefly to search for the problem but I didn't succeed. However, I tried generating random arrays of random lengths and checking this solution against a "brute-force" one implemented directly from the specs, and I'm reasonably confident that the algorithm is correct.
I know at least one programming arena (Topcoder) where sometimes the problems are carefully crafted so that the computation gives correct results if using unsigned but not if using int (or if using unsigned long long but not if using long long) because of integer overflow.
I don't know if SPOJ also does this kind of nonsense(1)... maybe that is the reason some hidden test case fails...
EDIT
Checking with SPOJ the algorithm passes if using long long values... this is the entry I used:
#include <stdio.h>
#include <algorithm>
#include <vector>

int main(int argc, const char *argv[]) {
    int n;
    scanf("%i", &n);
    for (int testcase=0; testcase<n; testcase++) {
        int sz; scanf("%i", &sz);
        std::vector<long long> arr(sz);
        for (int i=0; i<sz; i++) scanf("%lli", &arr[i]);
        int ni = arr.size(), nj = 0, i = 0, j = 0;
        long long res = 0;
        std::sort(arr.begin(), arr.end());
        auto get = [&]() -> long long {
            if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
            return arr[j++];
        };
        while ((ni-i)+(nj-j)>1) {
            long long a = get(), b = get();
            res += a+b;
            arr[nj++] = a + b;
        }
        printf("%lli\n", res);
    }
    return 0;
}
PS: This very kind of computation is also what is needed to build a Huffman tree for entropy coding given the symbol frequency table, and thus it's not a mere random exercise but one with practical applications.
(1) I'm saying "nonsense" because in Topcoder they never give problems that require 65 bits; thus it's not genuine care about overflows, but just setting traps for novices.
Another thing I saw on TC that I think is bad practice is that some problems are carefully designed so that the correct algorithm, if using C++, will barely fit in the timeout limit: just use another language (and get, e.g., a 2× slowdown) and you cannot solve the problem.
First of all, think simple!
When using a priority queue, the problem is easy!
In the first test case:
1 6 3 20
// after pushing to Q
1 3 6 20
// and sum two top items and pop and push!
(1 + 3) 6 20 cost = 4
(4 + 6) 20 cost = 10 + 4
(10 + 20) cost = 30 + 14
30 cost = 44
#include <iostream>
#include <queue>
using namespace std;

int main()
{
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        priority_queue<long long int, vector<long long int>, greater<long long int>> q;
        for (int i = 0; i < n; ++i) {
            int k;
            cin >> k;
            q.push(k);
        }
        long long int sum = 0;
        while (q.size() > 1) {
            long long int a = q.top();
            q.pop();
            long long int b = q.top();
            q.pop();
            q.push(a + b);
            sum += a + b;
        }
        cout << sum << "\n";
    }
}
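For the first test case above (1 6 3 20), this prints 44, matching the trace.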
Basically we need to repeatedly merge the two smallest remaining values; in Python, heapq gives us exactly that:

import heapq

def min_cost(A):
    heapq.heapify(A)
    cost = 0
    while len(A) > 1:
        s = heapq.heappop(A) + heapq.heappop(A)  # merge the two smallest
        cost += s
        heapq.heappush(A, s)
    return cost
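For the example above, min_cost([1, 2, 3, 4]) returns 19, matching the hand-computed total.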
I'm making a simple program to calculate the number of pairs in an array whose sum is divisible by 3; the array length and values are user-determined.
Now my code works fine. However, I just want to check if there is a faster way to calculate it that results in less running time.
When the length of the array is 10^4 or less, the program takes less than 100ms. However, as it gets to 10^5 it spikes up to 1000ms. Why is this, and how can I improve the speed?
#include <iostream>
#include <vector>  // std::vector is used below but was not included
using namespace std;

int main()
{
    int N, i, b;
    b = 0;
    cin >> N;
    unsigned int j = 0;
    std::vector<unsigned int> a(N);
    for (j = 0; j < N; j++) {
        cin >> a[j];
        if (j == 0) {
        }
        else {
            for (i = j - 1; i >= 0; i = i - 1) {
                if ((a[j] + a[i]) % 3 == 0) {
                    b++;
                }
            }
        }
    }
    cout << b;
    return 0;
}
Your algorithm has O(N^2) complexity. There is a faster way.
(a[i] + a[j]) % 3 == ((a[i] % 3) + (a[j] % 3)) % 3
Thus, you need not know the exact numbers, only their remainders of division by three. A zero remainder of the sum can be obtained either from two numbers with zero remainders (0 + 0) or from two numbers with remainders 1 and 2 (1 + 2).
The result will be equal to r[1]*r[2] + r[0]*(r[0]-1)/2, where r[i] is the quantity of numbers with remainder equal to i.
long long r[3] = {};  // long long: with 10^5 inputs, r[1]*r[2] can overflow an int
for (int i : a) {
    r[i % 3]++;
}
std::cout << r[1]*r[2] + (r[0]*(r[0]-1)) / 2;
The complexity of this algorithm is O(N).
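For completeness, a self-contained sketch of this counting approach, assuming the same input format as in the question (N followed by N values):

#include <iostream>

int main() {
    int N;
    std::cin >> N;
    long long r[3] = {};  // r[i] = how many inputs have remainder i mod 3
    for (int k = 0; k < N; k++) {
        unsigned int x;
        std::cin >> x;
        r[x % 3]++;
    }
    // one remainder-1 with one remainder-2, or any two remainder-0 values
    std::cout << r[1] * r[2] + (r[0] * (r[0] - 1)) / 2;
    return 0;
}

A single pass and O(1) extra state, so the 10^5 case stops being a problem.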
I've encountered this problem before, and while I can't find my particular solution, you could improve running times by hashing.
The code would look something like this:
// A C++ program to check if arr[0..n-1] can be divided
// into pairs such that every pair is divisible by k.
#include <bits/stdc++.h>
using namespace std;

// Returns true if arr[0..n-1] can be divided into pairs
// with sum divisible by k.
bool canPairs(int arr[], int n, int k)
{
    // An odd-length array cannot be divided into pairs
    if (n & 1)
        return false;

    // Create a frequency map to count occurrences
    // of all remainders when divided by k.
    map<int, int> freq;

    // Count occurrences of all remainders
    for (int i = 0; i < n; i++)
        freq[arr[i] % k]++;

    // Traverse the input array and use freq to decide
    // if the given array can be divided into pairs
    for (int i = 0; i < n; i++)
    {
        // Remainder of the current element
        int rem = arr[i] % k;

        // If the remainder of the current element divides
        // k into two halves...
        if (2*rem == k)
        {
            // ...then there must be an even number of
            // occurrences of such a remainder
            if (freq[rem] % 2 != 0)
                return false;
        }
        // If the remainder is 0, then there must be an even
        // number of elements with 0 remainder
        else if (rem == 0)
        {
            if (freq[rem] & 1)
                return false;
        }
        // Else the number of occurrences of the remainder
        // must be equal to the number of occurrences of
        // k - remainder
        else if (freq[rem] != freq[k - rem])
            return false;
    }
    return true;
}

/* Driver program to test the above function */
int main()
{
    int arr[] = {92, 75, 65, 48, 45, 35};
    int k = 10;
    int n = sizeof(arr)/sizeof(arr[0]);
    canPairs(arr, n, k) ? cout << "True" : cout << "False";
    return 0;
}
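(For the driver above: the remainders modulo 10 are 2, 5, 5, 8, 5, 5, which pair up as (2, 8), (5, 5) and (5, 5), so it prints True.)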
That works for any k (in your case, 3).
Then again, this is not my code; it is the code you can find at the following link, with a proper explanation. I didn't just paste the link, since I think that's bad practice.
I need to find all the prime numbers from 2 to n using the Sieve of Eratosthenes. I looked at Wikipedia (Sieve of Eratosthenes) to find out what the Sieve of Eratosthenes was, and it gave me this pseudocode:

Input: an integer n > 1
Let A be an array of Boolean values, indexed by integers 2 to n,
initially all set to true.

for i = 2, 3, 4, ..., not exceeding √n:
    if A[i] is true:
        for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n:
            A[j] := false

Output: all i such that A[i] is true.
So I used this and translated it to C++. It looks fine to me, but I have a couple of errors. Firstly, if I input 2 or 3 for n, it says:
terminate called after throwing an instance of 'Range_error'
what(): Range_error: 2
Also, whenever I enter 100 or anything else (4, 234, 149, 22, anything), it accepts the input for n and doesn't do anything. Here is my C++ translation:
#include "std_lib_facilities.h"
int main()
{
/* this program will take in an input 'n' as the maximum value. Then it will calculate
all the prime numbers between 2 and n. It follows the Sieve of Eratosthenes with
the algorithms from Wikipedia's pseudocode translated by me into C++*/
int n;
cin >> n;
vector<string>A;
for(int i = 2; i <= n; ++i) // fills the whole table with "true" from 0 to n-2
A.push_back("true");
for(int i = 2; i <= sqrt(n); ++i)
{
i -= 2; // because I built the vector from 0 to n-2, i need to reflect that here.
if(A[i] == "true")
{
for(int j = pow(i, 2); j <= n; j += i)
{
A[j] = "false";
}
}
}
//print the prime numbers
for(int i = 2; i <= n; ++i)
{
if(A[i] == "true")
cout << i << '\n';
}
return 0;
}
The issue comes from the fact that the indexes are not in line with the values they represent, i.e., they are shifted down by 2. After this shift, they no longer have the same mathematical properties.
Basically, the value 3 is at position 1 and the value 4 is at position 2. When you test for division, you are using the positions as if they were values. So instead of testing whether 4%3==0, you are testing whether 2%1==0.
In order to make your program work, you have to remove the -2 shifting of the indexes:
int main()
{
    int n;
    cin >> n;
    vector<string> A;
    for(int i = 0; i <= n; ++i) // fills the whole table with "true" from 0 to n
        A.push_back("true");
    for(int i = 2; i <= sqrt(n); ++i)
    {
        if(A[i] == "true")
        {
            for(int j = pow(i, 2); j <= n; j += i)
            {
                A[j] = "false";
            }
        }
    }
    //print the prime numbers
    for(int i = 2; i <= n; ++i)
    {
        if(A[i] == "true")
            cout << i << '\n';
    }
    return 0;
}
I agree with the other comments: you could use a vector of bools, and directly initialize it with the right size and value:
std::vector<bool> A(n + 1, true);
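Putting that together, a minimal sketch of the whole sieve along these lines, where index i simply stands for the value i so no shifting is needed:

#include <iostream>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<bool> is_prime(n + 1, true);  // index == value, no offset anywhere
    for (int i = 2; i * i <= n; ++i)          // i * i <= n replaces the sqrt() bound
        if (is_prime[i])
            for (int j = i * i; j <= n; j += i)
                is_prime[j] = false;
    for (int i = 2; i <= n; ++i)
        if (is_prime[i])
            std::cout << i << '\n';
    return 0;
}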
Here you push back n-1 elements
vector<string>A;
for(int i = 2; i <= n; ++i) // fills the whole table with "true" from 0 to n-2
A.push_back("true");
but here you access your vector from A[2] to A[n].
//print the prime numbers
for(int i = 2; i <= n; ++i)
{
if(A[i] == "true")
cout << i << '\n';
}
A has elements at positions A[0] to A[n-2]. You might correct this defect by initializing your vector differently, for example as
vector<string> A(n+1, "true");
This creates a vector A with n+1 strings with the value "true", which can be accessed through A[0] to A[n]. With this your code should run, even if it still has other deficits. But I think you learn most if you just try to implement the sieve successfully and then look for (good) alternatives on the internet.
This is painful. Why are you using a string array to store boolean values, and not, let's say, an array of boolean values? Why are you leaving out the first two array elements, forcing you to adjust all indices? Which you then forget half the time, totally breaking your code. At least you should change this line:
i -= 2; // because I built the vector from 0 to n-2, i need to reflect that here.
to:
i -= 2; // because I left out the first two elements, I adjust for that here.
        // But only here; doing it everywhere is too annoying.
As a result of that design decision, when you execute this line:
for(int j = pow(i, 2); j <= n; j += i)
i is actually zero which means j will stay zero forever.
Given a sequence of n positive integers we need to count consecutive sub-sequences whose sum is divisible by k.
Constraints: N is up to 10^6, each element is up to 10^9, and K is up to 100.
EXAMPLE: Let N=5, K=3, and the array be 1 2 3 4 1
Here the answer is 4.
Explanation: there exist 4 sub-sequences whose sum is divisible by 3; they are:
3
1 2
1 2 3
2 3 4
My Attempt:

long long int count = 0;
for (int i = 0; i < n; i++) {
    long long int sum = 0;
    for (int j = i; j < n; j++)
    {
        sum = sum + arr[j];
        if (sum % k == 0)
        {
            count++;
        }
    }
}
But obviously this is a poor approach. Can there be a better approach for this question? Please help.
Complete Question: https://www.hackerrank.com/contests/w6/challenges/consecutive-subsequences
Here is a fast O(n + k) solution:
1) Let's compute the prefix sums pref[i] (for 0 <= i < n).
2) Now we can compute count[i] - the number of prefixes with sum i modulo k (0 <= i < k).
This can be done by iterating over all the prefixes and doing count[pref[i] % k]++.
Initially, count[0] = 1 (an empty prefix has sum 0) and 0 for i != 0.
3) The answer is the sum of count[i] * (count[i] - 1) / 2 over all i.
4) It is better to compute the prefix sums modulo k to avoid overflow.
Why does it work? Let's take a closer look at a subarray divisible by k. Let's say that it starts at position L and ends at position R. It is divisible by k if and only if pref[L - 1] == pref[R] (modulo k), because their difference is zero modulo k (by the definition of divisibility). So for each fixed modulo we can pick any two prefixes with that prefix sum modulo k (and there are exactly count[i] * (count[i] - 1) / 2 ways to do it).
Here is my code:
long long get_count(const vector<int>& vec, int k) {
    //Initialize the count array.
    vector<int> cnt_mod(k, 0);
    cnt_mod[0] = 1;
    int pref_sum = 0;
    //Iterate over the input sequence.
    for (int elem : vec) {
        pref_sum += elem;
        pref_sum %= k;
        cnt_mod[pref_sum]++;
    }
    //Compute the answer.
    long long res = 0;
    for (int mod = 0; mod < k; mod++)
        res += (long long)cnt_mod[mod] * (cnt_mod[mod] - 1) / 2;
    return res;
}
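For the example in the question (1 2 3 4 1 with k = 3), the prefix sums modulo 3 are 1, 0, 0, 1, 2; together with the empty prefix the counts are {3, 2, 1}, and the answer is 3 + 1 + 0 = 4, as expected.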
This should make your calculations easier:
//First we reduce all numbers to [0..K-1]
long long int count = 0;
for (int i = 0; i < n; i++) {
    arr[i] = arr[i] % K;
}
//Now we will calculate the count of all shortest subsequences.
long long int sum = 0;
int first(0);
std::vector<int> beg;
std::vector<int> end;
for (int i = 0; i < n; i++) {
    if (arr[i] == 0)
    {
        count++;
        continue;
    }
    sum += arr[i];
    if (sum == K)
    {
        beg.push_back(first);
        end.push_back(i);
        count++;
    }
    else
    {
        while (sum > K)
        {
            sum -= arr[first];
            first++;
        }
        if (sum == K)
        {
            beg.push_back(first);
            end.push_back(i);
            count++;
        }
    }
}
//This way we found all short subsequences. Now we need to count all subsequences that consist of several adjacent short subsequences.
int party(0);
for (int i = 0; i + 1 < (int)beg.size(); ++i) // the cast avoids unsigned wrap-around when beg is empty
{
    if (end[i] == beg[i+1])
    {
        count += party + 1;
        party++;
    }
    else
    {
        party = 0;
    }
}
So, with a max array size of 10^6 and a max remainder of 99, you will not get an overflow even if you need to sum all the numbers in a simple int32.
And the time spent will be around O(n + n), i.e. linear.
Please, can anyone provide a better algorithm than trying all the combinations for this problem?
Given an array A of N numbers, find the number of distinct pairs (i,
j) such that j >=i and A[i] = A[j].
First line of the input contains number of test cases T. Each test
case has two lines, first line is the number N, followed by a line
consisting of N integers which are the elements of array A.
For each test case print the number of distinct pairs.
Constraints:
1 <= T <= 10
1 <= N <= 10^6
-10^6 <= A[i] <= 10^6 for 0 <= i < N
I think that first sorting the array, then finding the frequency of every distinct integer, then adding up nC2 over all the frequencies, and finally adding the length of the array, should work. But unfortunately it gives a wrong answer for some cases which are not known to me. Please help. Here is the implementation.
Code:
#include <iostream>
#include <cstdio>
#include <algorithm>
using namespace std;

long fun(long a) //to find the aC2 for given a
{
    if (a == 1) return 0;
    return (a * (a - 1)) / 2;
}

int main()
{
    long t, i, j, n, tmp = 0;
    long long count;
    long ar[1000000];
    cin >> t;
    while (t--)
    {
        cin >> n;
        for (i = 0; i < n; i++)
        {
            cin >> ar[i];
        }
        count = 0;
        sort(ar, ar + n);
        for (i = 0; i < n - 1; i++)
        {
            if (ar[i] == ar[i + 1])
            {
                tmp++;
            }
            else
            {
                count += fun(tmp + 1);
                tmp = 0;
            }
        }
        if (tmp != 0)
        {
            count += fun(tmp + 1);
        }
        cout << count + n << "\n";
    }
    return 0;
}
Keep a count of how many times each number appears in the array. Then iterate over the count array and add the triangle number of each count.
For example (from the source test case):
Input:
3
1 2 1
count array = {0, 2, 1} // no zeroes, two ones, one two
pairs = triangle(0) + triangle(2) + triangle(1)
pairs = 0 + 3 + 1
pairs = 4
Triangle numbers can be computed by (n * n + n) / 2, and the whole thing is O(n).
Edit:
First, there's no need to sort if you're counting frequency. I see what you did with sorting, but if you just keep a separate array of frequencies, it's easier. It takes more space, but since the elements and the array length are both constrained to at most 10^6, the max you'll need is an int[10^6]. This easily fits in the 256MB space requirement given in the challenge. (Whoops: since elements can go negative, you'll need an array twice that size. Still well under the limit, though.)
For the n choose 2 part, the part you had wrong is that it's an n+1 choose 2 problem. Since each element can also be paired with itself, you have to add one to n. I know you were adding n at the end, but it's not the same: the difference between C(n, 2) and C(n+1, 2) is not one, but n.
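A sketch of what that counting approach could look like, assuming the stated constraints (values in [-10^6, 10^6], shifted by 10^6 so they can index a frequency array):

#include <iostream>
#include <vector>

int main() {
    const int OFFSET = 1000000;  // values may be negative, so shift them up
    int t;
    std::cin >> t;
    while (t--) {
        int n;
        std::cin >> n;
        std::vector<long long> freq(2 * OFFSET + 1, 0);
        for (int i = 0; i < n; i++) {
            int x;
            std::cin >> x;
            freq[x + OFFSET]++;
        }
        long long pairs = 0;
        for (long long f : freq)
            pairs += (f * f + f) / 2;  // triangle(f): C(f, 2) pairs plus f self-pairs
        std::cout << pairs << "\n";
    }
    return 0;
}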