The problem is fairly simple. Given an input of N (3 <= N <= 3000) integers, find the largest sum of a 3-integer arithmetic series in the sequence. E.g. (15, 8, 1) is a larger arithmetic series than (12, 7, 2) because 15 + 8 + 1 > 12 + 7 + 2. The integers that form the largest arithmetic series do NOT have to be adjacent, and the order they appear in is irrelevant.
An example input would be:
6
1 6 11 2 7 12
where the first number is N (in this case, 6) and the second line is the sequence N integers long.
And the output would be the largest sum of any 3-integer arithmetic series. Like so:
21
because 2, 7 and 12 have the largest sum of any 3-integer arithmetic series in the sequence, and 2 + 7 + 12 = 21. It is also guaranteed that a 3-integer arithmetic series exists in the sequence.
EDIT: The numbers that make up the sum (the output) have to form an arithmetic series (constant difference) that is 3 integers long. In the case of the sample input, (1 6 11) is a possible arithmetic series, but it is smaller than (2 7 12) because 2 + 7 + 12 > 1 + 6 + 11. Thus 21 would be output because it is larger.
Here is my attempt at solving this question in C++:
#include <bits/stdc++.h>
using namespace std;

vector<int> results;
vector<int> middle;
vector<int> diff;

int main(){
    int n;
    cin >> n;
    int sizes[n];
    for (int i = 0; i < n; i++){
        int size;
        cin >> size;
        sizes[i] = size;
    }
    sort(sizes, sizes + n, greater<int>());
    for (int i = 0; i < n; i++){
        for (int j = i+1; j < n; j++){
            int difference = sizes[i] - sizes[j];
            diff.insert(diff.end(), difference);
            middle.insert(middle.end(), sizes[j]);
        }
    }
    for (size_t i = 0; i < middle.size(); i++){
        int difference = middle[i] - diff[i];
        for (int j = 0; j < n; j++){
            if (sizes[j] == difference) results.insert(results.end(), middle[i]);
        }
    }
    int max = 0;
    for (size_t i = 0; i < results.size(); i++) {
        if (results[i] > max) max = results[i];
    }
    int answer = max * 3;
    cout << answer;
    return 0;
}
My approach was to record the middle number and the difference in separate vectors, then loop through the vectors and check whether the middle number minus the difference is in the array; if so, the middle number gets added to another vector. The largest middle number is then found and multiplied by 3 to get the sum. This brought my algorithm from O(n^3) down to roughly O(n^2). However, the algorithm doesn't always produce the correct output (and I can't think of a test case where it fails), and since I'm using separate vectors, I get a std::bad_alloc error for large N values because I am probably using too much memory. The time limit for this question is 1.4 sec per test case, and the memory limit is 64 MB.
Since N can only be max 3000, O(n^2) is sufficient. So what is an optimal O(n^2) solution (or better) to this problem?
So, a simple solution for this problem is to put all elements into a std::map to count their frequencies, then iterate over the first and second elements of the arithmetic progression and search the map for the third.
Iterating over the pairs takes O(n^2), and each map lookup with find() takes O(log n).
#include <iostream>
#include <climits>
#include <map>
using namespace std;

const int maxn = 3000;
int a[maxn+1];
map<int, int> freq;

int main()
{
    int n; cin >> n;
    for (int i = 1; i <= n; i++) { cin >> a[i]; freq[a[i]]++; } // inserting frequencies
    int maxi = INT_MIN;
    for (int i = 1; i <= n-1; i++)
    {
        for (int j = i+1; j <= n; j++)
        {
            int first = a[i], sec = a[j]; if (first > sec) { swap(first, sec); } // ensure that first is smaller than sec
            int gap = sec - first; // calculating the difference
            if (gap == 0 && freq[first] >= 3) { maxi = max(maxi, first*3); } // if first == sec then calculate immediately
            else
            {
                int third1 = first - gap; // only the "below first" option needs checking; the "above sec" option is covered when that pair is iterated
                if (freq.find(third1) != freq.end() && gap != 0) { maxi = max(maxi, first + sec + third1); } // finding the third element
            }
        }
    }
    cout << maxi;
}
Output : 21
Another test :
6
3 4 5 7 7 7
Output : 21
Another test :
5
10 10 9 8 7
Output : 27
You can use std::unordered_map instead to bring the average lookup cost down to O(1).
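For example, a minimal self-contained sketch of the container swap (everything else in the solution above stays the same):

#include <iostream>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> freq; // average O(1) insert and lookup
    for (int x : {1, 6, 11, 2, 7, 12}) freq[x]++;
    std::cout << (freq.find(7) != freq.end()) << "\n"; // prints 1: membership test
}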
Also see Why is "using namespace std;" considered bad practice?
The sum of a 3-element arithmetic progression is three times the middle element, so I would search around a middle element, and would start the search from the "upper" end of the "array" (and have it sorted). This way the first hit is the largest one. Also, the actual array would be a frequency map, so elements are unique, but we still track whether any element has 3 copies, because that can also be a hit (a progression with difference 0).
I think it is better to create the frequency map first and sort it afterwards, simply because it may mean sorting fewer elements, though they are going to be pairs of value and count in this case.
function max3(arr){
    let stats = new Map();
    for (let value of arr)
        stats.set(value, (stats.get(value) || 0) + 1);
    let array = Array.from(stats);      // array of [value, count] arrays
    array.sort((x, y) => y[0] - x[0]);  // sort by value, descending
    for (let i = 0; i < array.length; i++){
        let [value, count] = array[i];
        if (count >= 3)
            return 3 * value;
        for (let j = 0; j < i; j++)
            if (stats.has(2 * value - array[j][0]))
                return 3 * value;
    }
}
console.log(max3([1,6,11,2,7,12])); // original example
console.log(max3([3,4,5,7,7,7])); // an example of 3 identical elements
console.log(max3([10,10,9,8,7])); // an example from another answer
console.log(max3([1,2,11,6,7,12])); // example with non-adjacent elements
console.log(max3([3,7,1,1,1])); // check for finding lowest possible triplet too
Edit: I have sorted the array in descending order (a[0] is the largest).
int a[n]; // n is at most 18
Each item in the array is an integer in the range 0 to 9.
I want to construct 2 groups of numbers from the items in the array such that:
//group A:
numberA[0] = a[0]*pow(10,0);
numberA[1] = a[0]*pow(10,1) + a[1]*pow(10,0);
numberA[2] = a[0]*pow(10,2) + a[1]*pow(10,1) + a[2]*pow(10,0);
numberA[3] = a[0]*pow(10,3) + a[1]*pow(10,2) + a[2]*pow(10,1) + a[3]*pow(10,0);
numberA[n] = a[0]*pow(10,n-1) +...+ a[n-1]*pow(10,0)
//group B (suffixes, built from the right end):
numberB[0] = a[n-1]*pow(10,0);
numberB[1] = a[n-2]*pow(10,1) + a[n-1]*pow(10,0);
numberB[2] = a[n-3]*pow(10,2) + a[n-2]*pow(10,1) + a[n-1]*pow(10,0);
numberB[3] = a[n-4]*pow(10,3) + a[n-3]*pow(10,2) + a[n-2]*pow(10,1) + a[n-1]*pow(10,0);
numberB[k] = a[n-1-k]*pow(10,k) +...+ a[n-1]*pow(10,0)
For example if I have an array[4] = {9,8,7,6};
I would have the following numbers: numberA[] = {9, 98, 987, 9876} and numberB[] = {6, 76, 876, 9876}
Could anyone suggest an algorithm for this (a for loop, recursion, or anything witty), along with its time and space complexity?
My attempt so far:
long long int numberA = 0;
vector<long long int> numberAList;
numberA = num[0]*(pow(10,0));
numberAList.push_back(numberA);
numberA = 0;
for (int i = 0; i < n-1; i++ ){ // complexity: O(logn) & this algorithm is wrong
    for (int j = i+1; j < n-1; j++){
        numberA += num[i]*pow(10,n-1-j);
        cout << "numberA" << numberA << endl;
    }
    numberAList.push_back(numberA);
}
for (int i = 0; i < (n-2); i++ ){
    cout << numberAList[i] << " ";
}
After a suggestion from user1984, I wrote this code:
long long int numberA = 0;
vector<long long int> numberAList;
for (int i = 0; i < n-1; i++ ){ // complexity O(n)
    numberA *= 10;
    numberA += num[i];
    cout << i << ", numberA: " << numberA << endl;
    numberAList.push_back(numberA);
}
for (int i = 0; i < n-1; i++ ){
    cout << numberAList[i] << " ";
}
long long int numberB = 0;
vector<long long int> numberBList;
for (int i = 0; i < n; i++ ){ // complexity O(n)
    int j = i+1;
    if (j < n){
        for (; j < n; j++ ){
            numberB += num[j]*(pow(10,n-j-1));
        }
        numberBList.push_back(numberB);
        numberB = 0;
    }
}
cout << "numberBlist: " << endl;
for (int i = 0; i < (n-1); i++ ){
    cout << numberBList[i] << " ";
}
The code works fine with short numbers; however, with large numbers I start to see some weird results, even after changing the data type to long long int.
Could anyone help?
// if input = 94321
numberBlist:
94321 4321 321 21 1
numberAlist:
9 94 943 9432 94321
// but with input = 999999999999999999
numberBlist:
100000000000000016 10000000000000000 999999999999999 99999999999999 9999999999999 999999999999 99999999999 9999999999 999999999 99999999 9999999 999999 99999 9999 999 99 9
numberAlist:
9 99 999 9999 99999 999999 9999999 99999999 999999999 9999999999 99999999999 999999999999 9999999999999 99999999999999 999999999999999 9999999999999999 99999999999999999
This looks like an assignment, so I'm not going to provide code, but a (hopefully) clear approach that helps you to solve the problem in whatever language you prefer.
Have a variable current that tracks the current value you are at
Have an array res to accumulate the results as you iterate over the input array, nums.
For each nums_i in the array nums, do the following, respecting the order:
current *= 10
current += nums_i
push current onto res
That's it. I think the approach is self explanatory, but if you feel it needs more elaboration, please ask.
Update based on OP's recent edit:
To deal with arbitrary size integers you have 2 options:
Use a library that handles arbitrary sized number arithmetic, like gmplib, and go with an algorithm like the one you currently have or a similar one.
Use the fact that you don't change the base of the number and employ string/char concatenation. If you need to convert it to an integer at the end, you can use a library. Note that these libraries use bit manipulation techniques and have been optimized to work very fast. Read for example this informative article for details.
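As a rough illustration of the second option (my own sketch, not the answerer's code): building the prefixes as strings involves no arithmetic at all, so there is nothing to overflow.

#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<int> num{9, 8, 7, 6};  // sample digits
    std::string prefix;
    for (int d : num) {
        prefix += char('0' + d);       // append the digit as a character
        std::cout << prefix << " ";    // prints 9 98 987 9876
    }
    std::cout << "\n";
}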
A note on the test you ran on large numbers: "999999999999999999" fits in a 64-bit signed integer, and a "long long" in C++ is guaranteed to be at least 64 bits, so the current algorithm shouldn't have overflowed on it. I'm not very familiar with C++, but since not all parts of the code are shared, check whether you are storing that large value somewhere in a smaller data type that may overflow.
With std::partial_sum, you might do
#include <numeric>
#include <vector>

std::vector<int> nums{9, 8, 7, 6};
std::vector<int> res(nums.size());
std::partial_sum(nums.begin(), nums.end(),
                 res.begin(),
                 [](int acc, int n){ return 10 * acc + n; });
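For group B (the suffixes from the question's example), a similar call over the reversed range should work; this is my own sketch, assuming the suffix definition implied by {6, 76, 876, 9876}:

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<long long> nums{9, 8, 7, 6};
    std::vector<long long> res(nums.size());
    long long place = 10; // place value of the digit prepended on the left
    std::partial_sum(nums.rbegin(), nums.rend(),
                     res.begin(),
                     [&place](long long acc, long long d) {
                         long long r = acc + d * place;
                         place *= 10;
                         return r;
                     });
    for (long long v : res) std::cout << v << " "; // prints 6 76 876 9876
    std::cout << "\n";
}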
Given an array of N numbers (not necessarily sorted). We can merge any two numbers into one and the cost of merging the two numbers is equal to the sum of the two values. The task is to find the total minimum cost of merging all the numbers.
Example:
Let the array A = [1,2,3,4]
Then, we can remove 1 and 2, add them, and put the sum back in the array. The cost of this step would be (1+2) = 3.
Now, A = [3,3,4], Cost = 3
In the second step, we can remove 3 and 3, add them, and put the sum back in the array. The cost of this step would be (3+3) = 6.
Now, A = [4,6], Cost = 6
In the third step, we can remove both elements from the array and put their sum back in again. The cost of this step would be (4+6) = 10.
Now, A = [10], Cost = 10
So, total cost turns out to be 19 (10+6+3).
We will have to pick the 2 smallest elements at every step to minimize our total cost. A simple way to do this is a min-heap structure: we can peek at the minimum element in O(1), and insertion and removal are O(log n).
The time complexity of this approach is O(n log n).
But I tried another approach, and wasn't able to find the cases where it fails. The basic idea was that the sum of the two smallest elements chosen at any time will always be greater than or equal to the sum of the pair chosen before it. So the "temp" array will always be sorted, and we will be able to access its minimum elements in O(1).
As I am sorting the input array and then simply traversing the array, the complexity of my approach is O(n log n).
int minCost(vector<int>& arr) {
    sort(arr.begin(), arr.end());
    // temp array will contain the sum of all the pairs of minimum elements
    vector<int> temp;
    // index for arr
    int i = 0;
    // index for temp
    int j = 0;
    int cost = 0;
    // while we have more than 1 element combined in both the input and temp array
    while(arr.size() - i + temp.size() - j > 1) {
        int num1, num2;
        // selecting num1 (minimum element)
        if(i < arr.size() && j < temp.size()) {
            if(arr[i] <= temp[j])
                num1 = arr[i++];
            else
                num1 = temp[j++];
        }
        else if(i < arr.size())
            num1 = arr[i++];
        else if(j < temp.size())
            num1 = temp[j++];
        // selecting num2 (second minimum element)
        if(i < arr.size() && j < temp.size()) {
            if(arr[i] <= temp[j])
                num2 = arr[i++];
            else
                num2 = temp[j++];
        }
        else if(i < arr.size())
            num2 = arr[i++];
        else if(j < temp.size())
            num2 = temp[j++];
        // appending the sum of the minimum elements to the temp array
        int sum = num1 + num2;
        temp.push_back(sum);
        cost += sum;
    }
    return cost;
}
Is this approach correct? If not, please let me know what I am missing, and the test cases in which this algorithm fails.
SPOJ Link for the same problem
The logic seems very solid to me... the computed sums are never decreasing, and therefore you only need to consider the oldest two computed sums, the next two input elements, or the oldest sum and the next element.
I would just simplify the code:
#include <vector>
#include <algorithm>
#include <stdio.h>

int hsum(std::vector<int> arr) {
    int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
    std::sort(arr.begin(), arr.end());
    std::vector<int> temp;
    auto get = [&]()->int {
        if (j == nj || (i < ni && arr[i] < temp[j])) return arr[i++];
        return temp[j++];
    };
    while ((ni-i)+(nj-j) > 1) {
        int a = get(), b = get();
        res += a+b;
        temp.push_back(a + b); nj++;
    }
    return res;
}

int main() {
    fprintf(stderr, "%i\n", hsum(std::vector<int>{1,4,2,3}));
    return 0;
}
Very nice idea!
Another improvement is noting that the cumulative length of the two arrays being processed (the original one and the temporary one holding the sums) will decrease at every step.
Since the first step consumes two input elements, and every later step consumes two elements while producing only one, the write pointer of a "walking queue" allocated in the array itself can never catch up with the reading pointer.
This means that there is no need for a temporary array: the space for the sums can be found in the array itself...
int hsum(std::vector<int> arr) {
    int ni = arr.size(), nj = 0, i = 0, j = 0, res = 0;
    std::sort(arr.begin(), arr.end());
    auto get = [&]()->int {
        if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
        return arr[j++];
    };
    while ((ni-i)+(nj-j) > 1) {
        int a = get(), b = get();
        res += a+b;
        arr[nj++] = a + b;
    }
    return res;
}
About the error on SPOJ... I tried briefly to search for the problem but I didn't succeed. However, I tried generating random arrays of random lengths and checking this solution against a "brute-force" one implemented directly from the specs, and I'm reasonably confident that the algorithm is correct.
I know at least one programming arena (Topcoder) where sometimes the problems are carefully crafted so that the computation gives correct results if using unsigned but not if using int (or if using unsigned long long but not if using long long) because of integer overflow.
I don't know if SPOJ also does this kind of nonsense(1)... maybe that is the reason some hidden test case fails...
EDIT
Checking with SPOJ, the algorithm passes when using long long values... this is the entry I used:
#include <stdio.h>
#include <algorithm>
#include <vector>

int main(int argc, const char *argv[]) {
    int n;
    scanf("%i", &n);
    for (int testcase=0; testcase<n; testcase++) {
        int sz; scanf("%i", &sz);
        std::vector<long long> arr(sz);
        for (int i=0; i<sz; i++) scanf("%lli", &arr[i]);
        int ni = arr.size(), nj = 0, i = 0, j = 0;
        long long res = 0;
        std::sort(arr.begin(), arr.end());
        auto get = [&]() -> long long {
            if (j == nj || (i < ni && arr[i] < arr[j])) return arr[i++];
            return arr[j++];
        };
        while ((ni-i)+(nj-j) > 1) {
            long long a = get(), b = get();
            res += a+b;
            arr[nj++] = a + b;
        }
        printf("%lli\n", res);
    }
    return 0;
}
PS: This very kind of computation is also what is needed to build a Huffman tree for entropy coding given the symbol frequency table, so it's not a mere random exercise but has practical applications.
(1) I'm saying "nonsense" because in Topcoder they never give problems that require 65 bits; thus it's not a genuine care about overflows, but just setting traps for novices.
Another thing that I think is bad practice on TC is that some problems are carefully designed so that the correct algorithm, if written in C++, will barely fit in the timeout limit: just use another language (and get e.g. a 2x slowdown) and you cannot solve the problem.
First of all, think simple!
When using a priority queue, the problem is easy!
In the first test case:
1 6 3 20
// after pushing into the queue (kept sorted ascending)
1 3 6 20
// repeatedly pop the two smallest, push their sum, and add it to the cost
(1 + 3) 6 20    cost = 4
(4 + 6) 20      cost = 4 + 10 = 14
(10 + 20)       cost = 14 + 30 = 44
30              total cost = 44
#include <iostream>
#include <queue>
using namespace std;

int main()
{
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        priority_queue<long long int, vector<long long int>, greater<long long int>> q;
        for (int i = 0; i < n; ++i) {
            int k;
            cin >> k;
            q.push(k);
        }
        long long int sum = 0;
        while (q.size() > 1) {
            long long int a = q.top();
            q.pop();
            long long int b = q.top();
            q.pop();
            q.push(a + b);
            sum += a + b;
        }
        cout << sum << "\n";
    }
}
Basically we need to repeatedly merge the two smallest remaining values, which a min-heap gives us directly:

import heapq

def min_cost(A):
    heapq.heapify(A)
    cost = 0
    while len(A) > 1:
        s = heapq.heappop(A) + heapq.heappop(A)  # merge the two smallest
        cost += s
        heapq.heappush(A, s)
    return cost

print(min_cost([1, 2, 3, 4]))  # 19
Given an array A, find the highest unique element in array A. A unique element is one that is present only once in the array.
Input:
First line of input contains N, size of array A. Next line contains N
space separated elements of array A.
Output:
Print the highest unique number of array A. If there is no such
element present in the array then print -1.
Constraints:
1 ≤ N ≤ 10^6, 0 ≤ Ai ≤ 10^9
SAMPLE INPUT
5
9 8 8 9 5
SAMPLE OUTPUT
5
Explanation
In array A: 9 occurs two times, 8 occurs two times, and 5 occurs once; hence the answer is 5.
Can you explain what is wrong with this code?
#include <iostream>
using namespace std;
int main() {
    int n;
    cin >> n;
    int a[n], i, max = -99;
    for (i = 0; i < n; i++) {
        cin >> a[i];
    }
    for (i = 0; i < n; i++) {
        if (a[i] > max) {
            max = a[i];
            cout << max;
        }
    }
    for (i = 0; i < n; i++) {
        if (max == a[i]) {
            break;
        } else {
            // cout<<"-1";
        }
        max =
    }
    return 0;
}
There are a few problems here (right now it won't even compile at max =). But the algorithmic problem is this: the second for loop finds the maximum before rejecting duplicate entries. The reverse is needed: first reject duplicates (say, by setting them to -99), then find the max of what is left over.
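A minimal sketch of that idea (my own code, not a direct fix of the snippet above; it uses a frequency map instead of the -99 marker):

#include <iostream>
#include <unordered_map>
using namespace std;

int main() {
    int n;
    cin >> n;
    unordered_map<int, int> freq;
    for (int i = 0; i < n; i++) {
        int x;
        cin >> x;
        freq[x]++;
    }
    int best = -1; // elements are >= 0, so -1 doubles as "not found"
    for (auto& [value, count] : freq)
        if (count == 1 && value > best) best = value; // only unique values compete
    cout << best << "\n";
}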
As the title says, the task is:
Given number N eliminate K digits to get maximum possible number. The digits must remain at their positions.
Example: n = 12345, k = 3, max = 45 (first three digits eliminated and digits mustn't be moved to another position).
Any idea how to solve this?
(It's not homework, I am preparing for an algorithm contest and solve problems on online judges.)
1 <= N <= 2^60, 1 <= K <= 20.
Edit: Here is my solution. It's working :)
#include <iostream>
#include <string>
#include <queue>
#include <vector>
#include <iomanip>
#include <algorithm>
#include <cmath>
using namespace std;

int main()
{
    string n;
    int k;
    cin >> n >> k;
    int b = n.size() - k - 1;
    int c = n.size() - b;
    int ind = 0;
    vector<char> res;
    char max = n.at(0);
    for (int i=0; i<n.size() && res.size() < n.size()-k; i++) {
        max = n.at(i);
        ind = i;
        for (int j=i; j<i+c; j++) {
            if (n.at(j) > max) {
                max = n.at(j);
                ind = j;
            }
        }
        b--;
        c = n.size() - 1 - ind - b;
        res.push_back(max);
        i = ind;
    }
    for (int i=0; i<res.size(); i++)
        cout << res.at(i);
    cout << endl;
    return 0;
}
Brute force should be fast enough for your restrictions: n will have at most 19 digits. Generate all bitmasks with numDigits(n) bits. If exactly k bits are set, remove the digits at the positions corresponding to the set bits. Compare the result with the global optimum and update if needed.
Complexity: O(2^log n * log n). While this may seem like a lot, and asymptotically the same as O(n), it's going to be much faster in practice, because the logarithm in O(2^log n * log n) is a base-10 logarithm, which gives a much smaller value (1 + the base-10 logarithm of n is the number of digits of n). Since n < 2^60 has at most 19 digits, there are at most 2^19, about half a million, masks to try.
You can avoid the log n factor by generating combinations of the n digit positions taken n - k at a time and building the number made up of the chosen n - k positions as you generate each combination (pass it as a parameter). This basically means you solve the similar problem: given n, pick n - k digits in order such that the resulting number is maximum. A bitmask sketch of the brute force follows below.
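A rough sketch of that bitmask brute force (my own code; it reads the number as a string, and __builtin_popcount assumes a GCC-style compiler):

#include <iostream>
#include <string>

int main() {
    std::string n;
    int k;
    std::cin >> n >> k;
    int d = n.size();                                  // at most 19 digits
    std::string best;
    for (unsigned mask = 0; mask < (1u << d); ++mask) {
        if (__builtin_popcount(mask) != k) continue;   // exactly k digits removed
        std::string cand;
        for (int i = 0; i < d; ++i)
            if (!(mask >> i & 1)) cand += n[i];        // keep digits on cleared bits
        // all candidates have length d - k, so string order equals numeric order
        if (cand > best) best = cand;
    }
    std::cout << best << "\n";
}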
Note: there is a method to solve this that does not involve brute force, but I wanted to show the OP this solution as well, since he asked in the comments how it could be brute-forced. For the optimal method, investigate what would happen if we built our number digit by digit from left to right and, for each digit d, removed all currently selected digits that are smaller than it. When can we remove them and when can't we?
Among the leftmost k+1 digits, find the largest one (say it is located at the i-th position; in case of multiple occurrences, choose the leftmost one). Keep it. Repeat the algorithm with k_new = k-i+1 on digits i+1 to n of the original number (the i-1 digits skipped over count as removed).
E.g. k=5 and number = 7454982641
First k+1 digits: 745498
Best number is 9 and it is located at location i=5.
new_k=1, new number = 82641
First k+1 digits: 82
Best number is 8 and it is located at i=1.
new_k=1, new number = 2641
First k+1 digits: 26
Best number is 6 and it is located at i=2
new_k=0, new number = 41
Answer: 98641
Complexity is O(n) where n is the size of the input number.
Edit: As iVlad mentioned, in the worst case the complexity can be quadratic. You can avoid that by maintaining a heap of size at most k+1, which brings the complexity to O(n log k).
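For reference, here is a sketch of the plain greedy described above (my own code and naming, without the heap optimization):

#include <iostream>
#include <string>

// Repeatedly pick the largest digit in the window that still leaves
// enough digits to the right to complete the result.
std::string pickMax(const std::string& num, int k) {
    int keep = num.size() - k;                       // digits that must remain
    std::string res;
    int start = 0;
    for (int taken = 0; taken < keep; ++taken) {
        int limit = num.size() - (keep - taken - 1); // window end (exclusive)
        int best = start;
        for (int i = start; i < limit; ++i)
            if (num[i] > num[best]) best = i;
        res += num[best];
        start = best + 1;
    }
    return res;
}

int main() {
    std::cout << pickMax("7454982641", 5) << "\n";   // prints 98641
}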
Following may help:
#include <vector>
#include <algorithm>

void removeNumb(std::vector<int>& v, int k)
{
    if (k == 0) { return; }
    if (k >= v.size()) {
        v.clear();
        return;
    }
    for (int i = 0; i != v.size() - 1; )
    {
        if (v[i] < v[i + 1]) {
            v.erase(v.begin() + i);     // dropping a digit smaller than its successor increases the number
            if (--k == 0) { return; }
            i = std::max(i - 1, 0);     // step back: the previous digit may now be removable
        } else {
            ++i;
        }
    }
    v.resize(v.size() - k);             // remaining digits are non-increasing; trim from the right
}
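A hypothetical usage, assuming the digits are stored most-significant first and the function above is in scope:

#include <iostream>

int main() {
    std::vector<int> digits{1, 2, 3, 4, 5}; // the number 12345
    removeNumb(digits, 3);
    for (int d : digits) std::cout << d;    // prints 45
    std::cout << "\n";
}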
Can anyone please provide a better algorithm than trying all the combinations for this problem?
Given an array A of N numbers (not necessarily sorted), find the number of distinct pairs (i, j) such that j >= i and A[i] = A[j].
The first line of the input contains the number of test cases T. Each test case has two lines: the first line is the number N, followed by a line of N integers which are the elements of array A.
For each test case print the number of distinct pairs.
Constraints:
1 <= T <= 10
1 <= N <= 10^6
-10^6 <= A[i] <= 10^6 for 0 <= i < N
I think that first sorting the array, then finding the frequency of every distinct integer, then adding nC2 over all the frequencies and finally adding the length of the array should work. But unfortunately it gives a wrong answer for some unknown test cases. Please help. Here is the implementation.
code:
#include <iostream>
#include <cstdio>
#include <algorithm>
using namespace std;

long fun(long a) // to find aC2 for a given a
{
    if (a == 1) return 0;
    return (a * (a - 1)) / 2;
}

int main()
{
    long t, i, j, n, tmp = 0;
    long long count;
    long ar[1000000];
    cin >> t;
    while (t--)
    {
        cin >> n;
        for (i = 0; i < n; i++)
        {
            cin >> ar[i];
        }
        count = 0;
        sort(ar, ar + n);
        for (i = 0; i < n - 1; i++)
        {
            if (ar[i] == ar[i + 1])
            {
                tmp++;
            }
            else
            {
                count += fun(tmp + 1);
                tmp = 0;
            }
        }
        if (tmp != 0)
        {
            count += fun(tmp + 1);
        }
        cout << count + n << "\n";
    }
    return 0;
}
Keep a count of how many times each number appears in the array. Then iterate over the count array and add the triangular number of each count.
For example(from the source test case):
Input:
3
1 2 1
count array = {0, 2, 1} // no zeroes, two ones, one two
pairs = triangle(0) + triangle(2) + triangle(1)
pairs = 0 + 3 + 1
pairs = 4
Triangle numbers can be computed by (n * n + n) / 2, and the whole thing is O(n).
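A minimal sketch of this counting approach (my own code; the 10^6 offset maps negative values to valid indices):

#include <iostream>
#include <vector>

int main() {
    int t;
    std::cin >> t;
    while (t--) {
        int n;
        std::cin >> n;
        std::vector<long long> freq(2000001, 0);  // values lie in [-10^6, 10^6]
        for (int i = 0; i < n; ++i) {
            int x;
            std::cin >> x;
            ++freq[x + 1000000];                  // shift so negatives index from 0
        }
        long long pairs = 0;
        for (long long c : freq)
            pairs += c * (c + 1) / 2;             // triangle(c) pairs for each value
        std::cout << pairs << "\n";
    }
    return 0;
}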
Edit:
First, there's no need to sort if you're counting frequency. I see what you did with sorting, but if you just keep a separate array of frequencies, it's easier. It takes more space, but since the elements and the array length are both constrained to 10^6 in magnitude, the most you'll need is an int array of about 2 * 10^6 entries (elements can be negative, so the range is -10^6 to 10^6). This easily fits in the 256 MB memory limit given in the challenge.
For the n choose 2 part: for a value occurring f times, it's really an f+1 choose 2 problem, since each element can also pair with itself, and C(f+1, 2) is exactly the triangular number tri(f). Adding n at the end, as you do, compensates in total (the difference between tri(f) and C(f, 2) is f, and the f's sum to n), but computing tri(f) directly is simpler.