Find greatest amount using dynamic programming - c++

Given a coin n (n <= 10^9), I can exchange it for 3 coins: n/2, n/3 and n/4 (where / is floor division). What is the greatest amount I can make? My code is:
#include <iostream>
using namespace std;

int a[10000000];                 // memoized answers for every n below 10^7

// For larger n, recurse; small subproblems are answered from the table.
// long long avoids overflow: for n near 10^9 the answer exceeds 32 bits.
long long coin(long long n) {
    if (n < 10000000) {
        return a[n];
    } else {
        return max(n, coin(n / 2) + coin(n / 3) + coin(n / 4));
    }
}

int main() {
    long long n, ans;
    int i;
    a[0] = 0;
    for (i = 1; i < 10000000; i++) {
        a[i] = max(i, a[i / 2] + a[i / 3] + a[i / 4]);
    }
    while (cin >> n) {
        if (n < 10000000) {
            cout << a[n] << endl;
        } else {
            ans = coin(n);
            cout << ans << endl;
        }
    }
    return 0;
}
How can I improve its time and space complexity?
Problem:https://www.hackerearth.com/problem/algorithm/bytelandian-gold-coins/description/

A few thoughts, no definite answer yet.
First, your approach is quite reasonable in my opinion. You have numbers up to 10^9, which you cannot all preprocess. Instead, you take into account that the smaller numbers are "somehow" picked more often by the process, and so you memoize only up to a certain upper boundary, here 10^7.
An easy improvement to your basic algorithm comes from realizing that you only need to memoize multiples of 2 or 3. All other inputs can easily be related to those numbers in the coin function.
Another optimization could be to vary the upper bound 10^7 empirically. That is, choose some values between, say, 10^5 and 10^8 and then hand in the one with the minimum execution time.
Improving this basic approach is not trivial; the way to do it is by getting insight into the number-selection procedure. Basically, one should memoize those numbers which are selected most often, and leave out those numbers which are picked only a few times.
One could do a lot here, but usually the statistics on which the memoization procedure is based have to be generated on the fly in the program you hand in to the contest. I guess this makes it hard to come up with competitive solutions. I could imagine that simple rules of the form "memoize everything below 10,000", "memoize multiples of 5 above 10,000", "memoize multiples of 7 above 10,000" and so on could be useful. Such rules can easily be encoded in the program without requiring too much memory. They could be found in advance, for example by genetic algorithms.
For an exact approach, one can assume a uniform distribution of the coin numbers in the problem. Then one can loop over all numbers i up to 10^9 and record how often each number k < i is chosen by the procedure, giving an array count[k]. Next you pick a lower boundary L and memoize all numbers k where count[k] >= L. However, as mentioned, this procedure is too costly if it has to be done during the run itself.
What you could do instead is to pick only, say, the N most-often-picked numbers and hard-code them in the program. The actual number N of memoization entries can be determined from the memory constraint in the task.
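A related option, not spelled out above: instead of hard-coding which large values to memoize, cache whichever large values the recursion actually visits in a hash map, on top of the dense table for small n. A minimal sketch of that idea (same 10^7 threshold as in the question; the names are mine, and it is untested against the judge's limits):

#include <iostream>
#include <unordered_map>
#include <algorithm>
using namespace std;

const long long LIMIT = 10000000;               // same cut-off as the original code
int small_table[10000000];                      // dense table for n below the cut-off
unordered_map<long long, long long> big_memo;   // sparse cache for the large n actually visited

long long coin(long long n) {
    if (n < LIMIT) return small_table[n];
    auto it = big_memo.find(n);
    if (it != big_memo.end()) return it->second;
    long long best = max(n, coin(n / 2) + coin(n / 3) + coin(n / 4));
    big_memo[n] = best;
    return best;
}

int main() {
    small_table[0] = 0;
    for (int i = 1; i < LIMIT; i++)
        small_table[i] = max(i, small_table[i / 2] + small_table[i / 3] + small_table[i / 4]);
    long long n;
    while (cin >> n)
        cout << coin(n) << "\n";
    return 0;
}

The map only ever holds the relatively few distinct values the recursion reaches above the cut-off, so memory stays modest while repeated queries for large n become cheap.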


How to find a better algorithm taking less Execution Time?

#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    int *arr = new int[n];
    for (int k = 0; k < n; k++) {
        cin >> arr[k];
    }
    long long sum1 = 0, sum2 = 0, sum3 = 0;   // sum, even count, odd count
    for (int k = 0; k < n; k++) {
        sum1 = sum1 + arr[k];
        if (arr[k] % 2 == 0)
            sum2++;
        else
            sum3++;
    }
    cout << sum1 << " ";
    cout << sum3 << " ";
    cout << sum2;
    delete[] arr;                             // release the buffer we allocated
    return 0;
}
You're given a sequence of N integers, your task is to print sum of them, number of odd integers, and number of even integers respectively.
Input
The first line of input contains an integer N (1≤N≤10⁵).
The second line of input contains N integers separated by a single space (1≤Ai≤10⁵).
Output
Print the sum of them, number of odd integers, and number of even integers respectively, separated by a space.
Examples
input
5
1 2 3 4 5
output
15 3 2
Is there a better algorithm for this code? I need it to take less Execution Time.
Where can I find better algorithms for any code?
Unless you need to re-use the N integers that you have stored in the array, there's no point in storing them. You can get the sum as well as number of odd/even integers as you input them.
Additionally, check the constraints before deciding on types: with N and the values both up to 10^5, the sum can reach 10^10, so long long is in fact needed for the sum, but the two counters fit comfortably in an int.
Further, whenever you are thinking about improving performance you should look at the big-O complexity, which in this case is O(N), where N is the number of integers you read. From an algorithmic point of view there is generally very little you can do to improve on a single pass over N inputs. If we were talking about streams you could maintain running statistics, but otherwise this implementation is as good as it gets. In some other situations the worst case cannot be improved but the average case can; I don't think that is applicable here.
Then you should look at profiling the code. That way you have a clear understanding of where bottlenecks are. For your code, there's probably not too much that can be done reasonably.
If we're trying to squeeze every ounce of performance possible, adjusting the compiler flags can bring some performance gains. You should research these but I would not prioritize this over the above.
I would also improve how you name your variables, but this has no impact on performance.
Actually, C++ by default synchronizes cin/cout with the C way of doing I/O (printf/scanf), which slows down I/O by quite a lot.
Switching to printf/scanf, or adding something like ios::sync_with_stdio(false); at the start of main, should speed this up several times.
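Putting those suggestions together - no array, counting while reading, and unsyncing the streams - a minimal sketch could look like this:

#include <iostream>
using namespace std;

int main() {
    ios::sync_with_stdio(false);   // stop syncing iostreams with C stdio
    cin.tie(nullptr);              // don't flush cout before every cin read
    int n;
    cin >> n;
    long long sum = 0;             // the sum can reach 10^10, so keep 64 bits here
    int odd = 0, even = 0;
    for (int k = 0; k < n; k++) {
        int x;
        cin >> x;                  // process each value as it is read; nothing is stored
        sum += x;
        if (x % 2 == 0)
            even++;
        else
            odd++;
    }
    cout << sum << " " << odd << " " << even << "\n";
    return 0;
}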

pseudo random distribution which guarantees all possible permutations of value sequence - C++

Random question.
I am attempting to create a program which would generate a pseudo-random distribution. I am trying to find the right pseudo-random algorithm for my needs. These are my concerns:
1) I need one input to generate the same output every time it is used.
2) It needs to be random enough that a person who looks at the output from input 1 sees no connection between that and the output from input 2 (etc.), but there is no need for it to be cryptographically secure or truly random.
3)Its output should be a number between 0 and (29^3200)-1, with every possible integer in that range a possible and equally (or close to it) likely output.
4) I would like to be able to guarantee that every possible permutation of sequences of 410 outputs is also a potential output of consecutive inputs. In other words, all the possible groupings of 410 integers between 0 and (29^3200)-1 should be potential outputs of sequential inputs.
5) I would like the function to be invertible, so that I could take an integer, or a series of integers, and say which input or series of inputs would produce that result.
The method I have developed so far is to run the input through a simple Halton sequence:
boost::multiprecision::mpz_int denominator = 1;
boost::multiprecision::mpz_int numerator = 0;
while (input > 0) {
    denominator *= 3;
    numerator = numerator * 3 + (input % 3);
    input = input / 3;
}
and multiply the result by 29^3200. It meets requirements 1-3, but not 4. And it is invertible only for single integers, not for series (since not all sequences can be produced by it). I am working in C++, using boost multiprecision.
Any advice someone can give me concerning a way to generate a random distribution meeting these requirements, or just a class of algorithms worth researching towards this end, would be greatly appreciated. Thank you in advance for considering my question.
----UPDATE----
Since multiple commenters have focused on the size of the numbers in question, I just wanted to make clear that I recognize the practical problems that working with such sets poses, but in asking this question I'm interested only in the theoretical or conceptual approach to the problem. For example, imagine working with a much smaller set of integers, like 0 to 99, and permutations of output sequences of length 10. How would you design an algorithm to meet these five conditions: 1) input is deterministic, 2) appears random (at least to the human eye), 3) every integer in the range is a possible output, 4) not only all values but also all permutations of value sequences are possible outputs, 5) the function is invertible.
---second update---
with many thanks to @Severin Pappadeux I was able to invert an LCG. I thought I'd add a little bit about what I did, to hopefully make it easier for anyone seeing this in the future. First of all, these are excellent sources on inverting modular functions:
https://www.khanacademy.org/computing/computer-science/cryptography/modarithmetic/a/modular-inverses
https://www.khanacademy.org/computer-programming/discrete-reciprocal-mod-m/6253215254052864
If you take the equation next = (a*x + c) % m, running the following code with your values of a and m will print out the Euclidean equations you need to find ainverse, as well as the value of ainverse:
int qarray[12];
qarray[0] = 0;
qarray[1] = 1;
int i = 2;
int reset = m;                      // keep the original modulus around for later
while (m % a > 0) {
    int remainder = m % a;
    int quotient = m / a;
    std::cout << m << " = " << quotient << "*" << a << " + " << remainder << "\n";
    qarray[i] = qarray[i - 2] - (qarray[i - 1] * quotient);
    m = a;
    a = remainder;
    i++;
}
if (qarray[i - 1] < 0) { qarray[i - 1] += reset; }   // a negative result needs m added back
std::cout << qarray[i - 1] << "\n";
The other thing it took me a while to figure out is that if you get a negative result, you should add m to it. You should add a similar correction to your inverted equation:
prev = (ainverse * (next - c)) % m;
if (prev < 0) { prev += m; }
I hope that helps anyone who ventures down this road in the future.
Ok, I'm not sure there is a general answer, so I would concentrate on a random number generator with, say, a 64-bit internal state/seed, producing 64-bit output and having a 2^64-1 period. In particular, I would look at a linear congruential generator (aka LCG) of the form
next = (a * prev + c) mod m
where a and m are relatively prime (coprime).
So:
1) Check
2) Check
3) Check (well, for 64bit space of course)
4) Check (again except 0, I believe, but each and every permutation of 64 bits is an output of the LCG when you start from some seed)
5) Check. LCG is known to be reversible, i.e. one could get
prev = (next - c) * a_inv mod m
where a_inv could be computed from a, m using Euclid's algorithm
Well, if it looks ok to you, you could try to implement an LCG in your 15546-bit space.
UPDATE
And a quick search turns up a reversible LCG discussion with code here:
Reversible pseudo-random sequence generator
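To make the reversal step concrete, here is a small sketch using the classic a = 1103515245, c = 12345, m = 2^31 constants (any a coprime to m would do); the inverse of a modulo m comes out of the extended Euclidean algorithm:

#include <cstdint>
#include <iostream>

// Illustration only: small, well-known LCG constants; gcd(a, m) = 1 so a is invertible mod m.
const int64_t a = 1103515245, c = 12345, m = 2147483648LL;   // m = 2^31

// Extended Euclid: returns gcd(p, q) and fills x, y so that p*x + q*y = gcd(p, q).
int64_t ext_gcd(int64_t p, int64_t q, int64_t &x, int64_t &y) {
    if (q == 0) { x = 1; y = 0; return p; }
    int64_t x1, y1;
    int64_t g = ext_gcd(q, p % q, x1, y1);
    x = y1;
    y = x1 - (p / q) * y1;
    return g;
}

int64_t next_state(int64_t prev) { return (a * prev + c) % m; }

int64_t prev_state(int64_t next) {
    int64_t x, y;
    ext_gcd(a, m, x, y);                        // x = a^(-1) mod m, since gcd(a, m) = 1
    int64_t a_inv = ((x % m) + m) % m;          // normalize into [0, m)
    int64_t diff = (((next - c) % m) + m) % m;  // (next - c) mod m, kept non-negative
    return (a_inv * diff) % m;
}

int main() {
    int64_t s = 123456789;
    int64_t n = next_state(s);
    std::cout << n << " -> " << prev_state(n) << "\n";   // recovers s
}

For the 15546-bit case the arithmetic is exactly the same, just done with a big-integer type such as boost::multiprecision::mpz_int instead of int64_t.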
In your update, "appears random (to the human eye)" is the phrasing you use. The definition of "appears random" is not a well agreed upon topic. There are varying degrees of tests for "randomness."
However, if you're just looking to make it appear random to the human eye, you can just use ring multiplication.
Start with the idea of generating N! values between 0 and M (N>=410, M>=29^3200)
Group this together into one big number; we're going to generate a single number ranging from 0 to M^N!. If we can show that the pseudorandom number generator generates every value from 0 to M^N!, we guarantee your permutation rule.
Now we need to make it "appear random." To the human eye, linear congruential generators are enough. Pick an LCG with a period greater than or equal to 410!*M^N that satisfies the rules ensuring a complete period. The easiest way to ensure fairness is to pick an LCG of the form x' = (ax+c) mod M^N!.
That'll do the trick. Now, the hard part is proving that what you did was worth your time. Consider that the period of just a 29^3200-long sequence is outside the realm of physical reality; you'll never actually use it all. Ever. Consider that a supercomputer made of Josephson junctions (10^-12 kg each, processing 10^11 bits/s) weighing the mass of the entire universe (3*10^52 kg) could process roughly 10^75 bits/s. A number that can count to 29^3200 is roughly 15545 bits long, so that supercomputer could process roughly 6.5*10^71 such numbers per second. This means it would take roughly 10^4600 s merely to count that high, or somewhere around 10^4592 years. Somewhere around 10^12 years from now the stars are expected to wink out, permanently, so it could be a while.
There are M**N sequences of N numbers between 0 and M-1.
You can imagine writing all of them one after the other in a (pseudorandom) sequence and placing your read pointer randomly in the resulting loop of N*(M**N) numbers between 0 and M-1...
def output(input):
    # pseudocode: all divisions are integer divisions;
    # shuffle = some fixed, invertible scrambling of 0..M**N-1
    total_length = N * (M**N)
    index = input % total_length
    permutation_index = shuffle(index / N, M**N)
    element = input % N
    return (permutation_index / (N**element)) % M
Of course for every permutation of N elements between 0 and M-1 there is a sequence of N consecutive inputs that produces it (just un-shuffle the permutation index). I'd also say (just using symmetry reasoning) that given any starting input the output of next N elements is equally probable (each number and each sequence of N numbers is equally represented in the total period).

Permutations of English Alphabet of a given length

So I have this code. Not sure if it works, because the program is still running.
void permute(std::vector<std::string>& wordsVector, std::string prefix, int length, std::string alphabet) {
    if (length == 0) {
        // end the recursion
        wordsVector.push_back(prefix);
    }
    else {
        for (int i = 0; i < alphabet.length(); ++i) {
            permute(wordsVector, prefix + alphabet.at(i), length - 1, alphabet);
        }
    }
}
where I'm trying to get all combinations of characters in the English alphabet of a given length. I'm not sure if the approach is correct at the moment.
The alphabet consists of A-Z in a string of length 26. wordsVector holds all the different combinations of words, prefix is passed along recursively until a word is built, and length is self-explanatory.
For example, if I give a length of 7 to the function, I expect a result of size 26 x 25 x 24 x 23 x 22 x 21 x 20 = 3315312000, if I'm correct, following the formula for permutations.
I don't think programs are meant to run this long so either I'm hitting an infinite loop or something is wrong with my approach. Please advise. Thanks.
Storing all those strings will exhaust memory long before the stack becomes a problem, but concentrating on your question: even if you write an iterative program it will take a long time (not an infinite loop, just very long).
[26L, 650L, 15600L, 358800L, 7893600L, 165765600L, 3315312000L, 62990928000L, 1133836704000L, 19275223968000L, 308403583488000L, 4626053752320000L, 64764752532480000L, 841941782922240000L, 10103301395066880000L, 111136315345735680000L, 1111363153457356800000L, 10002268381116211200000L, 80018147048929689600000L, 560127029342507827200000L, 3360762176055046963200000L, 16803810880275234816000000L, 67215243521100939264000000L, 201645730563302817792000000L, 403291461126605635584000000L, 403291461126605635584000000L]
The above list gives the number of possibilities for 1 <= n <= 26. You can see that as n increases, the number of possibilities grows tremendously. Say you have a 1 GHz processor that does 10^9 operations per second, and consider the count for n = 26: 403291461126605635584000000. It's evident that if you sit down to list all the possibilities it will take so many years that you will feel it has hit an infinite loop. Finally, I have not looked that closely at your code, but in a nutshell: even if you write it correctly and iteratively, don't store the results (again, you can't have that much memory) and just print them, it is still going to take a very long time for larger values of n.
EDIT
As jaromax and others said, if you just want to do this for smaller values of n, say less than 10-12, you can write an iterative program to list/print them; it will run quite fast for small values. But if you also want to store them, then n will have to be, say, less than 5. (It really depends on how much RAM is available, or you could generate some permutations and write them to disk, in which case it depends on how much disk space you can spare; again, refer to the list of counts I posted above, which gives a rough idea of both time and space complexity.)
I think doing this on the stack could also be a problem: a large part of the calculation is done recursively, which means space is allocated for a function call every time. Try to reformulate it iteratively; I think I have had such a problem before.
Your question implies you think there are 26x25x24x ... permutations
Your code doesn't have anything I can see to avoid "AAAAAAA" being a permutation, in which case there are 26x26x26x ...
So in addition to being a very complicated way of counting in base 26, I think it's also giving bad answers?
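If true permutations (no repeated letters) are what's wanted, so that the count really is 26 x 25 x 24 x ..., a minimal variation of the recursion that tracks which letters are already used might look like this; it is only feasible for small lengths, since the counts above grow astronomically:

#include <string>
#include <vector>

void permute(std::vector<std::string>& wordsVector, std::string& prefix,
             int length, const std::string& alphabet, std::vector<bool>& used) {
    if (length == 0) {
        wordsVector.push_back(prefix);      // a full word has been built
        return;
    }
    for (std::size_t i = 0; i < alphabet.size(); ++i) {
        if (used[i]) continue;              // skip letters already in the prefix
        used[i] = true;
        prefix.push_back(alphabet[i]);
        permute(wordsVector, prefix, length - 1, alphabet, used);
        prefix.pop_back();                  // undo and try the next letter
        used[i] = false;
    }
}

Call it with an empty prefix and a used vector of 26 false entries. For length 7 this still produces over 3.3 billion strings, so counting or printing on the fly is the only realistic option; storing them all in wordsVector is not.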

Factorizing a number

I've got a number which is less than 500,000,000 and I want to factorize it in an efficient way. What algorithm do you suggest? Note: I have a time limit of 0.01 sec!
I've just written this C++ code but it's absolutely awful!
void factorize(int x, vector<doubly> &factors)
{
    for (int i = 2; i <= x; i++)
    {
        if (x % i == 0)
        {
            doubly a;
            a.number = i;
            a.power = 0;
            while (x % i == 0)
            {
                a.power++;
                x /= i;
            }
            factors.push_back(a);
        }
    }
}
and doubly is something like this:
struct doubly
{
    int number;
    int power;
    // and some functions!!
};
just another point: I know that n is not a prime
As you might know, factorization is a hard problem. You might also know that you only have to test divisibility with primes. A small, but well known hint: You only have to test up to the square root of n. I leave the reasoning to you.
Look at the sieve of Eratosthenes. And maybe you find a hint in these questions and answers? How about that one?
If you want to make this even faster - without the full space/time trade-off of this answer - calculate all the prime numbers up to the square root of 500,000,000 in advance and put them into an array. Obviously this breaks when the upper limit grows ;)
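A sketch of that combination - sieve the primes up to sqrt(500,000,000), which is about 22,361, once at start-up, then trial-divide only by them (doubly is the struct from the question):

#include <vector>

// Simple sieve of Eratosthenes up to 'limit'.
std::vector<int> sieve_primes(int limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int i = 2; i <= limit; ++i) {
        if (!composite[i]) {
            primes.push_back(i);
            for (long long j = (long long)i * i; j <= limit; j += i)
                composite[j] = true;
        }
    }
    return primes;
}

void factorize(int x, std::vector<doubly>& factors, const std::vector<int>& primes) {
    for (int p : primes) {
        if ((long long)p * p > x) break;        // whatever is left is 1 or a single prime
        if (x % p == 0) {
            doubly a;
            a.number = p;
            a.power = 0;
            while (x % p == 0) { a.power++; x /= p; }
            factors.push_back(a);
        }
    }
    if (x > 1) {                                // leftover prime factor above sqrt(x)
        doubly a;
        a.number = x;
        a.power = 1;
        factors.push_back(a);
    }
}

Build the table once with sieve_primes(22361); each factorization afterwards needs at most roughly 2,500 trial divisions (the number of primes below 22,361), which fits easily into 0.01 s.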
Start to study the algorithms.
What is the fastest factorization algorithm?
Factorize all the integers up to 500,000,000 in advance (doesn't matter how) and store the factors in a database or fixed-length record format. Your lookup will be fast, and the database ought to fit onto a modern PC.
This is one end of the time/space tradeoff, but you didn't say what you're trying to optimize for.
Alternatively, look at the algorithm for GNU coreutils "factor".
You may try Pollard's rho heuristic; it's suitable for composite numbers with relatively small divisors:
Pollard's rho
If this is a homework assignment, I believe you should re-read your lecture material.
Anyway, you know your number is composite and very small, that's fine.
For naive trial division by all numbers you need at most sqrt(500,000,000) tests - that's about 22,360 in the worst case. You can obviously skip the even numbers since they're divisible by 2 (check that first), leaving about 11,180 divisions to do in 0.01 s. If your computer can do 1.1 M divisions per second then you can just use the naive approach.
Or, you can make a list of primes offline, up to sqrt(500M), and then trial-divide by each of those. This will cut down on the divisions some more.
Or, if the factors are not too far away from each other, you could try Fermat's method.
If these won't work, you can try to use Pollard's rho and others.
Or, if this is not homework, restate the problem to work around the limitations (as some have suggested, can you precompute the factored numbers beforehand etc.).

Calculating large factorials in C++

I understand this is a classic programming problem and therefore I want to be clear I'm not looking for code as a solution, but would appreciate a push in the right direction. I'm learning C++ and as part of the learning process I'm attempting some programming problems. I'm attempting to write a program which deals with numbers up to the factorial of 1 billion. Obviously these are going to be enormous numbers and way too big to deal with using normal arithmetic operations. Any indication as to what direction I should go in trying to solve this type of problem would be appreciated.
I'd rather try to solve this without using additional libraries if possible
Thanks
PS - the problem is here http://www.codechef.com/problems/FCTRL
Here's the method I used to solve the problem; I arrived at it by reading the comments below:
Solution -- The number 5 is a prime factor of any number ending in zero. Therefore, dividing N (not N! itself) by 5 repeatedly, feeding each quotient back in, and adding up the quotients gives the number of trailing zeros in N!.
E.g. - Number of trailing zeros in 126! = 31
126/5 = 25 remainder 1
25/5 = 5 remainder 0
5/5 = 1 remainder 0
25 + 5 + 1 = 31
This works for any value; just keep dividing until the quotient is less than 5.
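A minimal sketch of that method in C++ (the input loop assumes the linked problem's usual format of a test-case count followed by one N per line):

#include <iostream>

// Count trailing zeros of n! by summing n/5 + n/25 + n/125 + ... (floor division).
long long trailing_zeros(long long n) {
    long long count = 0;
    while (n >= 5) {
        n /= 5;          // next quotient
        count += n;      // add it to the running total
    }
    return count;
}

int main() {
    int t;
    std::cin >> t;
    while (t-- > 0) {
        long long n;
        std::cin >> n;
        std::cout << trailing_zeros(n) << '\n';
    }
    return 0;
}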
Skimmed this question, not sure if I really got it right but here's a deductive guess:
First question - how do you get a zero on the end of the number? By multiplying by 10.
How do you multiply by 10? Either by multiplying by 10 directly, or by a 2 and a 5...
So, for X! how many 10s and 2x5s do you have...?
(luckily 2 & 5 are prime numbers)
edit: Here's another hint - I don't think you need to do any multiplication. Let me know if you need another hint.
Hint: you may not need to calculate N! in order to find the number of zeros at the end of N!
To solve this question, as Chris Johnson said, you have to look at the number of 0's.
The factors of 10 are 1, 2, 5, and 10 itself. So you can go through each of the numbers making up N! and write them in terms of 2^x * 5^y * 10^z, discarding the other factors.
Now the answer will be the smaller of x and y, plus z (each summed over all the numbers).
One interesting thing I learned from this question is that it is often better to store the factorial of a number in terms of its prime factors for easy comparisons.
To actually compute x^y there is an easy method used in the RSA algorithm, which I don't remember exactly. I will try to update the post if I find it.
This isn't a good answer to your question as you've modified it a bit from what I originally read. But I will leave it here anyway to demonstrate the impracticality of actually trying to do the calculations by main brute force.
One billion factorial is going to be out of reach of any bignum library. Such numbers will require more space to represent than almost anybody has in RAM. You are going to have to start paging the numbers in from storage as you work on them. There are ways to do this. The guy who recently calculated π out to 2700 billion places used such a library
Do not use the naive method. If you need to calculate the factorial, use a fast algorithm: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
I think that you should come up with a way to solve the problem in pseudo code before you begin to think about C++ or any other language for that matter. The nature of the question as some have pointed out is more of an algorithm problem than a C++ problem. Those who suggest searching for some obscure library are pointing you in the direction of a slippery slope, because learning to program is learning how to think, right? Find a good algorithm analysis text and it will serve you well. In our department we teach from the CLRS text.
You need a "big number" package - either one you use or one you write yourself.
I'd recommend doing some research into "large number algorithms". You'll want to implement the C++ equivalent of Java's BigDecimal.
Another way to look at it is using the gamma function. You don't need to multiply all those values to get the right answer.
To start you off, you should store the number in some sort of array like a std::vector (a digit for each position in the array) and you need to find a certain algorithm that will calculate a factorial (maybe in some sort of specialized class). ;)
// SIMPLE FUNCTION TO COMPUTE THE FACTORIAL OF A NUMBER
// THIS ONLY WORKS UP TO N = 20 (AFTER THAT, 64-BIT UNSIGNED OVERFLOWS)
// CAN YOU SUGGEST HOW WE CAN IMPROVE IT TO COMPUTE FACTORIAL OF 400 PLEASE?
#include <iostream>
using namespace std;

unsigned long long factorial(int x);   // function to compute factorial, described below

int main()
{
    int N;   // you can also get this as user input using cin.
    cout << "Enter integer\n";
    cin >> N;
    factorial(N);
    return 0;
} // end of main

unsigned long long factorial(int x)    // function to compute the factorial
{
    unsigned long long results = 1;
    for (int i = 1; i <= x; i++)
    {
        results = results * i;
    }
    cout << "Factorial of " << x << " is " << results << endl;
    return results;
}
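To make the digit-vector suggestion from earlier in the thread concrete, here is a minimal sketch that computes 400! exactly by storing one decimal digit per vector element (least significant first); it is illustrative rather than optimized:

#include <iostream>
#include <vector>

// Multiply the growing number by 2, 3, ..., n, digit by digit with a carry.
std::vector<int> big_factorial(int n) {
    std::vector<int> digits{1};                  // represents the value 1
    for (int k = 2; k <= n; ++k) {
        int carry = 0;
        for (std::size_t i = 0; i < digits.size(); ++i) {
            int prod = digits[i] * k + carry;
            digits[i] = prod % 10;
            carry = prod / 10;
        }
        while (carry > 0) {                      // append whatever carry is left
            digits.push_back(carry % 10);
            carry /= 10;
        }
    }
    return digits;
}

int main() {
    std::vector<int> d = big_factorial(400);     // the 400! asked about above
    for (auto it = d.rbegin(); it != d.rend(); ++it)
        std::cout << *it;
    std::cout << '\n';
    return 0;
}

Each multiplication is a schoolbook pass over the digits, so the whole computation is O(n * number_of_digits); 400! has just under 870 digits, so this finishes instantly.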