So, I was doing a question that asked us to divide an array into two parts such that the difference between the sums of the elements of the two parts is minimized.
Say A = [3 2 7 4 1]. The minimum difference is obtained when the two parts are [2 3 4] and [7 1], i.e. difference = (2+3+4) - (7+1) = 1.
My approach was pretty naive: it basically computes all the different subsets of the given array, calculates the absolute difference between each subset's sum and the sum of its complementary array, and reports the minimum of these values.
When I used int, my program gave the correct answer for all but two test cases; in those, the inputs exceeded the limits of int. So I changed it to long long, but this gave very weird results. It even started giving wrong answers for test cases that were previously correct.
Code giving CORRECT output (using int):
#include <bits/stdc++.h>
typedef long long ll;
using namespace std;
int min_diff = INT_MAX;
void subsetGen(vector <int> &curr,vector <int> &v, int n, int index, int sum)
{
if (!curr.empty())
{
int sum_1 = accumulate(curr.begin(), curr.end(), 0);
int diff = abs(2*sum_1 - sum);
min_diff = (diff<min_diff) ? diff : min_diff;
}
for (int i = index; i < v.size(); i++)
{
curr.push_back(v[i]);
subsetGen (curr, v, n, i+1, sum);
curr.pop_back(); // Backtracking
}
return;
}
int main()
{
int n, sum = 0;
cin >> n;
vector <int> v (n, 0);
for (int i=0; i<n; i++)
{
cin >> v[i];
sum += v[i];
}
vector <int> curr;
subsetGen(curr,v,n,0,sum);
cout << min_diff;
return 0;
}
Code giving INCORRECT output (using long long):
#include <bits/stdc++.h>
typedef long long ll;
using namespace std;
ll min_diff = LLONG_MAX;
void subsetGen(vector <ll> &curr,vector <ll> &v, int n, int index, ll sum)
{
if (!curr.empty())
{
ll sum_1 = accumulate(curr.begin(), curr.end(), 0);
ll diff = abs(2*sum_1 - sum);
min_diff = (diff<min_diff) ? diff : min_diff;
}
for (int i = index; i < v.size(); i++)
{
curr.push_back(v[i]);
subsetGen (curr, v, n, i+1, sum);
curr.pop_back(); // Backtracking
}
return;
}
int main()
{
int n;
ll sum = 0;
cin >> n;
vector <ll> v (n, 0);
for (int i=0; i<n; i++)
{
cin >> v[i];
sum += v[i];
}
vector <ll> curr;
subsetGen(curr,v,n,0,sum);
cout << min_diff;
return 0;
}
This was the Input I was checking for:
20
452747515 202201476 845758891 733204504 327861300 368456549 64252070 494676885 21095634 611030397 913689714 849191653 173901982 954566440 40404105 228316310 210730656 631709598 847867437 85805975
The correct answer is: 4881 (which the program using int gave)
But using long long is giving me: 4762526359 (which is the wrong answer).
I tested this code in online compilers to see if this was a problem with only my system, but got the same results.
I solved this problem from Codeforces: https://codeforces.com/problemset/problem/1471/B. But when I upload it, it says "memory limit exceeded". How can I reduce the memory usage? I used C++ for the problem. The problem was the following: "You have given an array a of length n and an integer x to a brand new robot. What the robot does is the following: it iterates over the elements of the array, let the current element be q. If q is divisible by x, the robot adds x copies of the integer q/x to the end of the array, and moves on to the next element. Note that the newly added elements could be processed by the robot later. Otherwise, if q is not divisible by x, the robot shuts down.
Please determine the sum of all values of the array at the end of the process".
This is the code:
#include <iostream>
#include <cstdlib>
#include <vector>
using namespace std;
int main()
{
vector<int> vec;
vector<int> ans;
int temp;
int t;
cin >> t;
int a = 0;
int n, x;
for(int i=0; i<t; i++){
cin >> n >> x;
while(a<n){
cin >> temp;
a++;
vec.push_back(temp);
}
int q = 0;
while(true){
if(vec[q]%x == 0){
for(int copies=0; copies<x; copies++){
vec.push_back(vec[q]/x);
}
}
else{
break;
}
q++;
}
int sum = 0;
for(int z: vec){
sum += z;
}
ans.push_back(sum);
vec.clear();
a = 0;
}
for(int y: ans){
cout << y << endl;
}
return 0;
}
Thanks.
You don't need to build the array as specified to compute the sum. You might do:
#include <vector>

// long long is used for the power and the running sum, since both can
// exceed the int range for this problem's limits
long long pow(int x, int n)
{
    long long res = 1;
    for (int i = 0; i != n; ++i) {
        res *= x;
    }
    return res;
}

long long compute(const std::vector<int>& vec, int x)
{
    long long res = 0;
    int i = 0;
    while (true) {
        const auto r = pow(x, i);   // x^i: level-i copies exist only for elements divisible by x^i
        for (auto e : vec) {
            if (e % r != 0) {
                return res;         // the robot shuts down here, so no further copies are added
            }
            res += e;               // the x^i copies of e / x^i sum back to e
        }
        ++i;
    }
}
Consider:
If you find an indivisible number in the original array, you're going to stop before you reach the numbers you have added (so they don't affect the result).
If you add q/x to the array but q/x isn't divisible by x, you're going to stop there when you reach it, if you haven't already stopped earlier. (On the other hand, if q/x is divisible by x, the sum of x copies of q/x is q, so adding them is equivalent to adding q.)
So you don't need to expand the array, you just need to sum the elements and - on the side - keep the sum of all the numbers you would have expanded with until you find one that is not a multiple of x.
Then you either add that to the sum of the array or not, depending on whether you reached the end of the array.
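For what it's worth, here is a minimal sketch (mine, not the answer's code above) that turns this level-by-level reasoning into a closed form: if k is the smallest number of times x divides any element, and p is the first element attaining that minimum, then every element is counted once per level 0..k, and the elements before p are expanded one extra level before the robot shuts down. It assumes long long is wide enough for the totals:
#include <cstdio>
#include <vector>

int main() {
    int t;
    std::scanf("%d", &t);
    while (t--) {
        long long n, x;
        std::scanf("%lld %lld", &n, &x);
        std::vector<long long> a(n);
        long long total = 0;
        for (auto &v : a) { std::scanf("%lld", &v); total += v; }
        // k = minimum number of times x divides an element;
        // prefixBeforeFirstMin = sum of the elements before the first element attaining k
        long long k = -1, prefixBeforeFirstMin = 0, runningPrefix = 0;
        for (long long v : a) {
            long long m = v, c = 0;
            while (m % x == 0) { m /= x; ++c; }
            if (k == -1 || c < k) { k = c; prefixBeforeFirstMin = runningPrefix; }
            runningPrefix += v;
        }
        // every element contributes once per level 0..k; the elements before the
        // first minimum are expanded one extra level before the robot shuts down
        std::printf("%lld\n", (k + 1) * total + prefixBeforeFirstMin);
    }
    return 0;
}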
I have a task which I have been trying to solve for the last week. It's driving me crazy. The task is:
Given a node count N (1 <= N <= 10,000),
a nonadjacent node pair count M (1 <= M <= 200,000),
and the M nonadjacent node pairs themselves,
(M_0A, M_0B),
(M_1A, M_1B),
...
(M_{M-1}A, M_{M-1}B),
find the maximum clique.
I am currently trying all kinds of variations of the Bron-Kerbosch algorithm,
but every time I exceed the time limit on the testing site. Below is the only code that doesn't exceed the time limit, BUT it gives a wrong answer. The code is somewhat optimized by not creating a new set on every recursion.
Anyway, PLEASE help me. I am a desperate Latvian teen programmer. I know this problem can be solved, because many people have solved it on the testing site.
#include <cstdio>
#include <map>
#include <set>
#include <vector>
std::map<int, std::set<int> > NotAdjacent;
unsigned int MaxCliqueSize = 0;
void PrintSet(std::set<int> &s){
for(auto it = s.begin(); it!=s.end(); it++){
printf("%d ",*it);
}
printf("\n");
}
void Check(std::set<int> &clique, std::set<int> &left){
//printf("printing clique: \n");
//PrintSet(clique);
//printf("printing left: \n");
//PrintSet(left);
if(left.empty()){
//PrintSet(clique);
if(clique.size()>MaxCliqueSize){
MaxCliqueSize = clique.size();
}
return;
}
while(left.empty()==false){
std::vector<int> removed;
int v = *left.begin();
left.erase(left.begin());
for(auto it2=NotAdjacent[v].begin();it2!=NotAdjacent[v].end();it2++){
auto findResult = left.find(*it2);
if(findResult!=left.end()){
removed.push_back(*it2);
left.erase(findResult);
}
}
clique.insert(v);
Check(clique, left);
clique.erase(v);
for(unsigned int i=0;i<removed.size();i++){
left.insert(removed[i]);
}
}
}
int main(){
int n, m;
scanf("%d%d",&n,&m);
int a, b;
for(int i=0;i<m;i++){
scanf("%d%d",&a,&b);
NotAdjacent[a].insert(b);
NotAdjacent[b].insert(a);
}
std::set<int> clique, left;
for(int i=1;i<=n;i++){
left.insert(i);
}
Check(clique, left);
printf("%d",MaxCliqueSize);
}
For what it's worth, this code seems to pass 5 tests, and I think all the rest exceed either the time or memory limits (submitted as C++11). The idea is to find a maximum independent set in the graph complement, whose edges are exactly the pairs we are given. The algorithm is my understanding of the standard greedy one. Perhaps this can give you or others more ideas? I believe there are some improved algorithms for MIS.
#include <iostream>
#include <cstdio>
#include <map>
#include <set>
#include <vector>
#include <algorithm>
using namespace std;
std::map<int, std::set<int> > NotAdjacent;
vector<int> Order;
unsigned int NumConnectedToAll = 0;
unsigned int MaxCliqueSize = 0;
bool sortbyN(int a, int b){
return (NotAdjacent[a].size() > NotAdjacent[b].size());
}
void mis(std::set<int> &g, unsigned int i, unsigned int size){
if (g.empty() || i == Order.size()){
if (size + NumConnectedToAll > MaxCliqueSize)
MaxCliqueSize = size + NumConnectedToAll;
return;
}
if (g.size() + size + NumConnectedToAll <= MaxCliqueSize)
return;
while (i < Order.size() && g.find(Order[i]) == g.end())
i++;
int v = Order[i];
std::set<int> _g;
_g = g;
_g.erase(v);
for (auto elem : NotAdjacent[v])
_g.erase(elem);
mis(_g, i + 1, size + 1);
}
int main(){
int n, m;
scanf("%d%d",&n,&m);
int a, b;
for(int i=0;i<m;i++){
scanf("%d%d",&a,&b);
NotAdjacent[a].insert(b);
NotAdjacent[b].insert(a);
}
std::set<int> g;
Order.reserve(NotAdjacent.size());
for (auto const& imap: NotAdjacent){
Order.push_back(imap.first);
g.insert(imap.first);
}
sort(Order.begin(), Order.end(), sortbyN);
for (int i=1; i<=n; i++)
if (NotAdjacent.find(i) == NotAdjacent.end())
NumConnectedToAll++;
for (unsigned int i=0; i<Order.size(); i++){
mis(g, i, 0);
g.erase(Order[i]);
}
printf ("%d", MaxCliqueSize);
return 0;
}
I was given this challenge in a programming "class". Eventually I decided to go for the "Binary Indexed Tree" solution, as data structures are something I'd like to know more about. Implementing the BIT was fairly straightforward; things after that, not so much. I ran into "Fatal Signal 11" when uploading the solution to the server, which, from what I've read, is roughly the equivalent of a null pointer exception. I couldn't figure out the problem, and when I checked out other BIT-based solutions I stumbled upon the same problem.
#include<iostream>
using namespace std;
/* <BLACK MAGIC COPIED FROM geeksforgeeks.org> */
int getSum(int BITree[], int index){
int sum = 0;
while (index > 0){
sum += BITree[index];
index -= index & (-index);
}
return sum;
}
void updateBIT(int BITree[], int n, int index, int val){
while (index <= n){
BITree[index] += val;
index += index & (-index);
}
}
/* <BLACK MAGIC COPIED FROM geeksforgeeks.org> */
int Count(int arr[], int x){
int sum = 0;
int biggest = 0;
for (int i=0; i<x; i++) {
if (biggest < arr[i]) biggest = arr[i];
}
int bit[biggest+1];
for (int i=1; i<=biggest; i++) bit[i] = 0;
for (int i=x-1; i>=0; i--)
{
sum += getSum(bit, arr[i]-1);
updateBIT(bit, biggest, arr[i], 1);
}
return sum;
}
int main(){
int x;
cin >> x;
int *arr = new int[x];
for (int temp = 0; temp < x; temp++) cin >> arr[temp];
/*sizeof(arr) / sizeof(arr[0]); <-- someone suggested this,
but it doesn't change anything from what I can tell*/
cout << Count(arr,x);
delete [] arr;
return 0;
}
I am quite stumped on this. It could be just some simple thing I'm missing, but I really don't know. Any help is much appreciated!
You have the condition that every number lies between 1 and 10^18, so your biggest number can be 10^18. This is far too much for the following line:
int bit[biggest+1];
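One common way around this (my own sketch, not part of the original answer) is to compress the values to their 1-based ranks before indexing the BIT, so the tree size depends on how many numbers there are rather than on how large they are. Assuming the goal is what the Count function above computes (for each element, counting smaller elements to its right):
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    int x;
    std::cin >> x;
    std::vector<long long> arr(x);              // values up to 10^18 need 64 bits
    for (auto &v : arr) std::cin >> v;

    // Coordinate compression: map each value to its 1-based rank in sorted order.
    std::vector<long long> sorted = arr;
    std::sort(sorted.begin(), sorted.end());
    sorted.erase(std::unique(sorted.begin(), sorted.end()), sorted.end());

    std::vector<long long> bit(sorted.size() + 1, 0);   // BIT over ranks, not raw values
    auto update = [&](int i) { for (; i < (int)bit.size(); i += i & -i) bit[i] += 1; };
    auto query  = [&](int i) { long long s = 0; for (; i > 0; i -= i & -i) s += bit[i]; return s; };

    long long count = 0;
    for (int i = x - 1; i >= 0; --i) {
        int r = int(std::lower_bound(sorted.begin(), sorted.end(), arr[i]) - sorted.begin()) + 1;
        count += query(r - 1);      // elements already seen (to the right) that are smaller
        update(r);
    }
    std::cout << count;
    return 0;
}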
I'm attempting to create a program that multiplies the values at 3 distinct positions of vector 1 ('V1') and finds the maximum such product.
I'm using 3 'for' loops for counting and comparing. The program reads the element count 'N', then all 'N' numbers from 'input.txt'. After that, it finds the greatest product 'max' and writes it to 'output.txt'.
But I need to keep the program as efficient as possible: there is a 16 MB memory limit and a 1 second time limit (I get 1.004 seconds and 33 MB). Is there a more efficient way to do this?
#include <vector>
#include <fstream>
#include <iostream>
#include <algorithm>
using namespace std;
int main()
{
int N;
long long max = -9223372036854775807;
int input;
vector<long long> V1;
ifstream file1;
file1.open("input.txt");
file1 >> N;
V1.resize(N);
for (int i = 0; i < N; i++)
{
file1 >> V1[i];
}
for (int i = 0; i < N; i++)
for (int j = 0; j < i; j++)
for (int k = 0; k < j; k++)
if (V1[i] * V1[j] * V1[k] > max)
{
max = V1[i] * V1[j] * V1[k];
}
ofstream file2;
file2.open("output.txt");
file2 << max;
file2.close();
}
File Input.txt
5
10 10 10 -300 -300
From looking at what you have done: you need to find the greatest product of 3 numbers from the given input vector.
Just sort vector V1 in descending order and output the maximum of (the product of the first 3 elements) and (the product of the first element and the last 2 elements). This is efficient in both space and time.
Like this:
sort(V1.begin(), V1.end(), greater<long long>()); // sorts in descending order; compare as long long, not int, to avoid truncation
int n = V1.size() - 1;
cout << max(V1[0] * V1[1] * V1[2], V1[0] * V1[n] * V1[n-1]);
The first thing that comes to my mind: why do you store these values? You only need the single maximum value - there is no need to store all these values, push them and, moreover, sort them.
Other important notes:
You have a vector of long long, but you read ints. Since you have big numbers in your input, use long long everywhere.
Pushing an item and then popping it back is pointless - you should have checked the condition before pushing, to avoid two unnecessary operations.
Anyway, you don't need to compare i, j, k for equality at all - they are never equal because of your loop bounds.
Pushing items one by one into a vector when you already know their count is wasteful: growing a vector takes extra time. You may want to resize it to the given size instead.
Probably, this code will meet your memory/time requirements:
#include <fstream>
#include <vector>
using namespace std;

int main()
{
    int N;
    long long maximum = -9223372036854775807; // or use LLONG_MIN from <climits>
    vector<long long> V1;
    ifstream file1;
    file1.open("input.txt");
    file1 >> N;
    V1.resize(N);
    for (int i = 0; i < N; i++){
        file1 >> V1[i];
    }
    file1.close();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < i; j++)
            for (int k = 0; k < j; k++)
                if (V1[i] * V1[j] * V1[k] > maximum)
                    maximum = V1[i] * V1[j] * V1[k];
    ofstream file2;
    file2.open("output.txt");
    file2 << maximum;
    file2.close();
    return 0;
}
Well, as soon as size and time reduction matter, I tend to remove all unnecessary language goodies: they do help with clean programming, but they come at a resource cost.
So if you really wanted to keep all products of different indices of a list of values, I would advise you to throw away vectors, push and pop, and use fixed-size arrays.
But before that low-level optimisation, we should think of all possible algorithmic optimisations. You only want the biggest product among all possible products of three different values taken from a list. For c > 0, a >= b <=> a*c >= b*c, and the product of two negative numbers is positive.
So the highest product may only come from:
product of three highest positive values
product of one highest positive value and two lowest negative values (highest in absolute value)
product of three highest negative values if there are no positive values
So you do not even need to load the full initial vector but just keep:
Three highest positive values
Three highest negative values
Two lowest negative values
You collect them at read time, in O(n), and only ever store eight values. If you only have five values it gains nothing, but it is linear in time and constant in space whatever the number of values you process.
Possible implementation:
#include <iostream>
#include <fstream>
#include <climits>
using namespace std;
class Max3 {
long long pmax[3];
long long nmax[3];
long long nmin[2];
void push(long long *record, long long val, size_t pos) {
for(size_t i=0; i<pos; i++) {
record[i] = record[i + 1];
}
record[pos] = val;
}
void set(long long *record, long long val, size_t sz) {
for (size_t i=1; i<sz; i++) {
if (val < record[i]) {
push(record, val, i - 1);
return;
}
}
push(record, val, sz - 1);
}
public:
Max3() {
size_t i;
for (i=0; i<sizeof(pmax)/sizeof(pmax[0]); i++)
pmax[i] = 0;
for (i=0; i<sizeof(nmin)/sizeof(nmin[0]); i++)
nmin[i] = 0;
for (i=0; i<sizeof(nmax)/sizeof(nmax[0]); i++)
nmax[i] = LLONG_MIN;
}
void test(long long val) {
if (val >= *pmax) {
set(pmax, val, 3);
}
else if (val <= 0) {
if (val <= *nmin) {
set(nmin, -val, 2);
}
if (val >= *nmax) {
set(nmax, val, 3);
}
}
}
long long getMax() {
long long max = 0, prod, pm;
if ((prod = pmax[0] * pmax[1] * pmax[2]) > max)
max = prod;
if (pmax[2] > 0)
pm = pmax[2];
else if (pmax[1] > 0)
pm = pmax[1];
else
pm = pmax[0];
if ((prod = nmin[0] * nmin[1] * pm) > max)
max = prod;
if ((prod = nmax[0] * nmax[1] * nmax[2]) > max)
max = prod;
return max;
}
};
int main() {
int N;
long long input;
Max3 m3;
ifstream file1;
file1.open("input.txt");
file1 >> N;
for (int i = 0; i < N; i++){
file1 >> input;
m3.test(input);
}
file1.close();
ofstream file2;
file2.open("output.txt");
file2 << m3.getMax();
file2.close();
return 0;
}
The code is slightly more complex, but the program size is only 35 KB, with little dynamic allocation.
After replacing the 'for' loops with a sort of vector 'V1' (in descending order), the program compares the products 'V1[0] * V1[1] * V1[2]' and 'V1[0] * V1[N] * V1[N - 1]', and then prints the maximum to the file output.txt:
#include <vector>
#include <fstream>
#include <iostream>
#include <algorithm>
#include <functional>
using namespace std;
int main()
{
int N;
long long max = -9223372036854775807;
int input;
vector<long long> V1;
ifstream file1;
file1.open("input.txt");
file1 >> N;
V1.resize(N);
for (int i = 0; i < N; i++){
file1 >> V1[i];
}
sort(V1.begin(), V1.end(), greater<long long>()); // compare as long long; greater<int>() would truncate large values
N -= 1;
max = V1[0] * V1[1] * V1[2];
if (max < V1[0] * V1[N] * V1[N - 1])
max = V1[0] * V1[N] * V1[N - 1];
ofstream file2;
file2.open("output.txt");
file2 << max;
file2.close();
}
I have been working on this for 24 hours now, trying to optimize it. The task is to find the number of trailing zeroes in the factorial of numbers up to 10,000,000, with about 10 million test cases, in roughly 8 seconds.
The code is as follows:
#include<iostream>
using namespace std;
int count5(int a){
int b=0;
for(int i=a;i>0;i=i/5){
if(i%15625==0){
b=b+6;
i=i/15625;
}
if(i%3125==0){
b=b+5;
i=i/3125;
}
if(i%625==0){
b=b+4;
i=i/625;
}
if(i%125==0){
b=b+3;
i=i/125;
}
if(i%25==0){
b=b+2;
i=i/25;
}
if(i%5==0){
b++;
}
else
break;
}
return b;
}
int main(){
int l;
int n=0;
cin>>l; //no of test cases taken as input
int *T = new int[l];
for(int i=0;i<l;i++)
cin>>T[i]; //nos taken as input for the same no of test cases
for(int i=0;i<l;i++){
n=0;
for(int j=5;j<=T[i];j=j+5){
n+=count5(j); //no of trailing zeroes calculted
}
cout<<n<<endl; //no for each trialing zero printed
}
delete []T;
}
Please help me by suggesting a new approach, or suggesting some modifications to this one.
Use the following theorem:
If p is a prime, then the highest power of p which divides n! (n factorial) is [n/p] + [n/p^2] + [n/p^3] + ... + [n/p^k], where k is the largest integer such that p^k <= n, and [x] is the integral part of x.
For example, with n = 100 and p = 5 this gives [100/5] + [100/25] = 20 + 4 = 24, so 100! ends in 24 zeros.
Reference: PlanetMath
The optimal solution runs in O(log N) time, where N is the number you want to find the zeroes for. Use this formula:
Zeroes(N!) = N/5 + N/25 + N/125 + ... + N/5^k (integer division), stopping once a division becomes 0. You can read more on Wikipedia.
So for example, in C this would be:
int Zeroes(int N)
{
int ret = 0;
while ( N )
{
ret += N / 5;
N /= 5;
}
return ret;
}
This will run within the 8 seconds on a sufficiently fast computer. You can probably speed it up further by using lookup tables, although I'm not sure how much memory you have available.
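A hedged sketch of that lookup-table idea (mine, not the answerer's; MAXN = 10,000,000 is taken from the question's stated range): precompute the answer for every n once, then each test case is a single array lookup, at the cost of roughly 40 MB for the table.
#include <cstdio>
#include <vector>

int main() {
    const int MAXN = 10000000;                 // assumed upper bound on the inputs
    std::vector<int> zeros(MAXN + 1, 0);
    for (int n = 1; n <= MAXN; ++n) {
        int m = n, c = 0;
        while (m % 5 == 0) { m /= 5; ++c; }    // multiplicity of 5 in n
        zeros[n] = zeros[n - 1] + c;           // zeros of n! = zeros of (n-1)! + that multiplicity
    }
    int t;
    std::scanf("%d", &t);
    while (t--) {
        int n;
        std::scanf("%d", &n);
        std::printf("%d\n", zeros[n]);
    }
    return 0;
}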
Here's another suggestion: don't store the numbers - you don't need them! Calculate the number of zeroes for each number as you read it.
If this is for an online judge: in my experience online judges have very tight time limits, so you may have to resort to ugly hacks even when you have the right algorithm. One such ugly hack is to avoid functions such as cin and scanf, and instead use fread to read a big chunk of data into a char array at once, then parse the numbers out of that data yourself (DON'T use sscanf or stringstreams though). Ugly, but usually a lot faster.
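To illustrate that fread idea, here is a rough sketch (mine, not part of the original answer; the 1 MB chunk size and helper names are arbitrary) that combines bulk reading with the counting formula above:
#include <cstddef>
#include <cstdio>
#include <vector>

static std::vector<char> buf;   // the whole input
static std::size_t pos = 0;     // current parse position

static void readAll() {
    const std::size_t chunk = 1 << 20;
    std::size_t got;
    do {
        std::size_t old = buf.size();
        buf.resize(old + chunk);
        got = std::fread(buf.data() + old, 1, chunk, stdin);
        buf.resize(old + got);
    } while (got == chunk);
}

static int nextInt() {          // parse the next non-negative integer
    while (pos < buf.size() && (buf[pos] < '0' || buf[pos] > '9')) ++pos;
    int v = 0;
    while (pos < buf.size() && buf[pos] >= '0' && buf[pos] <= '9')
        v = v * 10 + (buf[pos++] - '0');
    return v;
}

int main() {
    readAll();
    int t = nextInt();
    while (t--) {
        int n = nextInt(), zeros = 0;
        while (n) { n /= 5; zeros += n; }   // trailing zeros of n!
        std::printf("%d\n", zeros);
    }
    return 0;
}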
This question is from CodeChef:
http://www.codechef.com/problems/FCTRL
How about this solution:
#include <stdio.h>
int a[] = {5, 25, 125, 625, 3125, 15625, 78125, 390625, 1953125, 9765625, 48828125, 244140625};
int main()
{
int i, j, l, n, ret = 0, z;
scanf("%d", &z);
for(i = 0; i < z; i++)
{
ret = 0;
scanf("%d", &n);
for(j = 0; j < 12; j++)
{
l = n / a[j];
if(l <= 0)
break;
ret += l;
}
printf("%d\n", ret);
}
return 0;
}
Any optimizations???
I know this is over 2 years old, but here's my code for future reference:
#include <cmath>
#include <cstdio>
inline int read()
{
char temp;
int x=0;
temp=getchar_unlocked();
while(temp<48)temp=getchar_unlocked();
x+=(temp-'0');
temp=getchar_unlocked();
while(temp>=48)
{
x=x*10;
x+=(temp-'0');
temp=getchar_unlocked();
}
return x;
}
int main()
{
int T,x,z;
int pows[]={5,25,125,625,3125,15625,78125,390625,1953125,9765625,48828125,244140625};
T=read();
for(int i=0;i<T;i++)
{
x=read();
z=0;
for(int j=0;j<12 && pows[j]<=x;j++)
z+=x/pows[j];
printf("%d\n",z);
}
return 0;
}
It ran in 0.13s
Here is my accepted solution. Its score is 1.51s, 2.6M. Not the best, but maybe it can help you.
#include <iostream>
using namespace std;
void calculateTrailingZerosOfFactoriel(int testNumber)
{
int numberOfZeros = 0;
while (true)
{
testNumber = testNumber / 5;
if (testNumber > 0)
numberOfZeros += testNumber;
else
break;
}
cout << numberOfZeros << endl;
}
int main()
{
//cout << "Enter number of tests: " << endl;
int t;
cin >> t;
for (int i = 0; i < t; i++)
{
int testNumber;
cin >> testNumber;
calculateTrailingZerosOfFactoriel(testNumber);
}
return 0;
}
#include <cstdio>
int main(void) {
long long int t, n, s, i, j;
scanf("%lld", &t);
while (t--) {
i=1; s=0; j=5;
scanf("%lld", &n);
while (i != 0) {
i = n / j;
s = s + i; // i = n / j counts the multiples of j = 5^k; each contributes one more factor of 5
j = j * 5;
}
printf("%lld\n", s);
}
return 0;
}
You clearly already know the correct algorithm. The bottleneck in your code is the use of cin/cout. When dealing with very large input, cin is extremely slow compared to scanf.
scanf is also slower than direct methods of reading input such as fread, but using scanf is sufficient for almost all problems on online judges.
This is detailed in the Codechef FAQ, which is probably worth reading first ;)