I understand this is a classic programming problem, so I want to be clear that I'm not looking for code as a solution, but I would appreciate a push in the right direction. I'm learning C++, and as part of the learning process I'm attempting some programming problems. I'm attempting to write a program which deals with numbers up to the factorial of 1 billion. Obviously these are going to be enormous numbers and way too big to handle with normal arithmetic operations. Any indication as to what direction I should go in trying to solve this type of problem would be appreciated.
I'd rather try to solve this without using additional libraries if possible
Thanks
PS - the problem is here http://www.codechef.com/problems/FCTRL
Here's the method I used to solve the problem; I arrived at it by reading the comments below:
Solution -- The number 5 is a prime factor of any number ending in zero. Therefore, dividing N by 5 repeatedly (integer division) and adding the quotients gives the number of trailing zeros in N!
E.G. - Number of trailing zeros in 126! = 31
126/5 = 25 remainder 1
25/5 = 5 remainder 0
5/5 = 1 remainder 0
25 + 5 + 1 = 31
This works for any value; just keep dividing until the quotient is less than 5.
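To make that concrete, here is a minimal C++ sketch of the divide-by-5 counting described above (my own illustration; the input format of a test-case count followed by one N per line follows the linked FCTRL problem, if I recall it correctly):

#include <iostream>

//count trailing zeros of N! by summing N/5 + N/25 + N/125 + ...
//each term counts the multiples of that power of 5 up to N
long long trailingZeros(long long n)
{
    long long count = 0;
    for (long long p = 5; p <= n; p *= 5)
        count += n / p;
    return count;
}

int main()
{
    int t;
    std::cin >> t; //number of test cases (assumed, per the linked FCTRL problem)
    while (t-- > 0)
    {
        long long n;
        std::cin >> n;
        std::cout << trailingZeros(n) << '\n';
    }
    return 0;
}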
Skimmed this question, not sure if I really got it right but here's a deductive guess:
First question - how do you get a zero on the end of the number? By multiplying by 10.
How do you multiply by 10? Either by multiplying by a 10, or by a 2 x 5...
So, for X! how many 10s and 2x5s do you have...?
(luckily 2 & 5 are prime numbers)
edit: Here's another hint - I don't think you need to do any multiplication. Let me know if you need another hint.
Hint: you may not need to calculate N! in order to find the number of zeros at the end of N!
To solve this question, as Chris Johnson said you have to look at number of 0's.
The factors of 10 are 1, 2, 5, and 10 itself. So you can go through each of the numbers that make up N! and write it in terms of 2^x * 5^y * 10^z, discarding the other factors.
Now the answer will be min(x, y) + z (the smaller of the two counts limits how many 10s you can form).
One interesting thing I learned from this question is that it's always better to store the factorial of a number in terms of its prime factors for easy comparisons.
To actually compute x^y, there is an easy method used in the RSA algorithm, which I don't remember exactly. I will try to update the post if I find it.
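The method alluded to there is most likely binary (square-and-multiply) exponentiation, which RSA implementations use for modular powers. A minimal sketch, as my own illustration and ignoring overflow:

//square-and-multiply: computes base^exp with O(log exp) multiplications
//by walking over the bits of the exponent
long long power(long long base, unsigned int exp)
{
    long long result = 1;
    while (exp > 0)
    {
        if (exp & 1)       //this bit is set: multiply the result in
            result *= base;
        base *= base;      //square for the next bit
        exp >>= 1;
    }
    return result;
}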
This isn't a good answer to your question as you've modified it a bit from what I originally read. But I will leave it here anyway to demonstrate the impracticality of actually trying to do the calculation by brute force.
One billion factorial is going to be out of reach of any bignum library. Such numbers will require more space to represent than almost anybody has in RAM. You are going to have to start paging the numbers in from storage as you work on them. There are ways to do this. The guy who recently calculated π out to 2,700 billion places used such a library.
Do not use the naive method. If you need to calculate the factorial, use a fast algorithm: http://www.luschny.de/math/factorial/FastFactorialFunctions.htm
I think that you should come up with a way to solve the problem in pseudo code before you begin to think about C++ or any other language for that matter. The nature of the question as some have pointed out is more of an algorithm problem than a C++ problem. Those who suggest searching for some obscure library are pointing you in the direction of a slippery slope, because learning to program is learning how to think, right? Find a good algorithm analysis text and it will serve you well. In our department we teach from the CLRS text.
You need a "big number" package - either one you use or one you write yourself.
I'd recommend doing some research into "large number algorithms". You'll want to implement the C++ equivalent of Java's BigDecimal.
Another way to look at it is using the gamma function. You don't need to multiply all those values to get the right answer.
To start you off, you should store the number in some sort of array like a std::vector (a digit for each position in the array) and you need to find a certain algorithm that will calculate a factorial (maybe in some sort of specialized class). ;)
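A minimal sketch of that digit-array idea (my own illustration, storing one decimal digit per vector element, least significant digit first):

#include <iostream>
#include <vector>

//factorial as a vector of decimal digits, least significant digit first,
//so the result is not limited by the range of any built-in integer type
std::vector<int> bigFactorial(int n)
{
    std::vector<int> digits{1};
    for (int i = 2; i <= n; ++i)
    {
        int carry = 0;
        for (std::size_t j = 0; j < digits.size(); ++j)
        {
            int product = digits[j] * i + carry;
            digits[j] = product % 10;
            carry = product / 10;
        }
        while (carry > 0) //append whatever carry digits are left over
        {
            digits.push_back(carry % 10);
            carry /= 10;
        }
    }
    return digits;
}

int main()
{
    std::vector<int> digits = bigFactorial(100);
    for (auto it = digits.rbegin(); it != digits.rend(); ++it)
        std::cout << *it; //print from the most significant digit down
    std::cout << '\n';
    return 0;
}

Each multiplication by i is done digit by digit with a carry, the same way you would multiply by hand.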
//SIMPLE FUNCTION TO COMPUTE THE FACTORIAL OF A NUMBER
//THIS ONLY WORKS UP TO ABOUT N = 20 (A 64-BIT UNSIGNED INTEGER OVERFLOWS BEYOND THAT)
//CAN YOU SUGGEST HOW WE CAN IMPROVE IT TO COMPUTE THE FACTORIAL OF 400 PLEASE?
#include <iostream>
using namespace std;

unsigned long long factorial(int x); //function to compute factorial described below

int main()
{
    int N; //= 150; //you can also get this as user input using cin.
    cout << "Enter integer\n";
    cin >> N;
    factorial(N);
    return 0;
} //end of main

unsigned long long factorial(int x) //function to compute the factorial
{
    unsigned long long results = 1;
    for (int i = 1; i <= x; i++)
    {
        results = results * i;
    }
    cout << "Factorial of " << x << " is " << results << endl;
    return results;
}
#include <iostream>
using namespace std;

int main() {
    int n;
    cin >> n;
    int *arr = new int[n];
    for (int k = 0; n > k; k++)
    {
        cin >> *(arr + k);
    }
    long long sum1 = 0, sum2 = 0, sum3 = 0;
    for (int k = 0; n > k; k++)
    {
        sum1 = sum1 + *(arr + k);
        if (*(arr + k) % 2 == 0)
            sum2++;
        else
            sum3++;
    }
    cout << sum1 << " ";
    cout << sum3 << " ";
    cout << sum2;
    delete[] arr;
    return 0;
}
You're given a sequence of N integers, your task is to print sum of them, number of odd integers, and number of even integers respectively.
Input
The first line of input contains an integer N (1≤N≤10⁵).
The second line of input contains N integers separated by a single space (1≤Ai≤10⁵).
Output
Print the sum of them, number of odd integers, and number of even integers respectively, separated by a space.
Examples
input
5
1 2 3 4 5
output
15 3 2
Is there a better algorithm for this code? I need it to take less execution time.
Where can I find better algorithms for any code?
Unless you need to re-use the N integers that you have stored in the array, there's no point in storing them. You can get the sum as well as number of odd/even integers as you input them.
Additionally, you don't need long long for the odd/even counters; only the sum can overflow an int, since with N and Ai both up to 10^5 it can reach 10^10.
Further, whenever you are thinking about improving performance you should take a look at the big O which in this case is O(N) where N is the number of integers that you have. From an algorithm point of view with N input there's generally very little that you can do to improve this. Maybe if we're talking streams, you can do some statistics but otherwise this implementation is as good as it gets. In some other situations, while the worst case can't be improved, we can improve the average case, which I don't think is applicable here.
Then you should look at profiling the code. That way you have a clear understanding of where bottlenecks are. For your code, there's probably not too much that can be done reasonably.
If we're trying to squeeze every ounce of performance possible, adjusting the compiler flags can bring some performance gains. You should research these but I would not prioritize this over the above.
I would also improve how you name your variables, but this has no impact on performance.
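A minimal sketch of that read-as-you-go idea (my own illustration, not the answerer's code):

#include <iostream>

int main()
{
    int n;
    std::cin >> n;
    long long sum = 0; //the sum of up to 10^5 values of 10^5 can exceed 32-bit range
    int odd = 0, even = 0;
    for (int k = 0; k < n; ++k)
    {
        int value;
        std::cin >> value; //process each number as it is read; nothing is stored
        sum += value;
        if (value % 2 == 0)
            ++even;
        else
            ++odd;
    }
    std::cout << sum << ' ' << odd << ' ' << even << '\n';
    return 0;
}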
Actually, C++ by default synchronizes cin/cout with the C way of doing I/O (printf/scanf), which slows down I/O by quite a lot.
Switching to printf/scanf or adding something like ios::sync_with_stdio(0); at the start of main should speed this up a few times.
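For example (a minimal illustration; untying cin from cout is a common companion tweak):

#include <iostream>

int main()
{
    std::ios::sync_with_stdio(false); //stop synchronizing with printf/scanf
    std::cin.tie(nullptr);            //don't flush cout before every cin read
    // ... the rest of the program ...
    return 0;
}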
So, I've got to write a recursive function which counts how many "1" digits there are in the binary representation of any given number. For this part, I managed to create the function which converts a decimal to binary:
if dec = 0 then 0
else dec mod 2 + 10 *f(dec/2)
;;
but I've got no idea how to make the program check each digit and count the ones I want.
And I have to write a function which has to calculate the sum of the 1/(n!) series. Again, I tried for hours, but the best I could do was :
if n <= 1. then 1.
else 1./.(n*.e(n-.1.)) +. (e(n-.1.))
;;
which doesn't work the right way, because I guess the formula isn't right.
Can somebody help me? ;-; Please, I want to understand the way this works.
For the first question, you probably want to generate a list of bits first rather than an integer. It should make further bit manipulation easier to handle.
For the second question, you should write a separate factorial function first and use it in your definition of e. Currently, you are mixing the sum computation and the factorial computation.
Given a coin n (<= 10^9), I can exchange it for 3 coins: n/2, n/3 and n/4 (where / represents floor division). What is the greatest amount I can make? My code is:
#include <iostream>
using namespace std;

int a[10000000];

long long coin(long long n) {
    if (n < 10000000) {
        return a[n];
    }
    else {
        return max(n, coin(n / 2) + coin(n / 3) + coin(n / 4));
    }
}

int main()
{
    //cout << "Hello World!" << endl;
    long long n, ans;
    int i;
    a[0] = 0;
    for (i = 1; i < 10000000; i++) {
        a[i] = max(i, a[i / 2] + a[i / 3] + a[i / 4]);
    }
    while (cin >> n) {
        if (n < 10000000) {
            cout << a[n] << endl;
        }
        else {
            ans = coin(n);
            cout << ans << endl;
        }
    }
    return 0;
}
How can I improve its time and space complexity?
Problem: https://www.hackerearth.com/problem/algorithm/bytelandian-gold-coins/description/
A few thoughts, no definite answer yet.
First, your approach is quite reasonable imo. You have numbers up to 10^9, and you cannot preprocess them all. Instead, you take into account that the smaller numbers "somehow" are picked more often by the process, and so you memoize only up to a certain upper boundary, here 10^7.
An easy improvement to your basic algorithm is to realize that you only need to memoize multiples of 2 or 3. All other inputs can easily be related to those numbers in the coin function.
Another optimization could be to vary the upper bound 10^7 empirically. That is, choose some values between, say, 10^5 and 10^8 and then hand in the one with the minimum execution time.
Improving this basic approach is not trivial, but the way to do it is by gaining insight into the number-selection procedure. Basically, one should memoize those numbers which are selected most often, and leave out those numbers which are picked only a few times.
One could do a lot here, but usually the required results on which the memoization procedure is based have to be generated on the fly in the program which you hand in to the contest. I guess this makes it hard to come up with competitive solutions. I could imagine that simple rules of the form "memoize all below 10,000", "memoize multiples of 5 above 10,000", "memoize multiples of 7 above 10,000" and so on could be useful. Such rules can be easily encoded into the program without requiring too much memory. They could be found in advance by genetic algorithms, for example.
For an exact approach, one can assume a uniform distribution of the coin numbers in the problem. Then one can loop over all numbers i up to 10^9 and record how often each number k < i is chosen by the procedure. The result is an array count[i]. Next you pick a lower boundary L for count[i] and memoize all numbers i where count[i] >= L. However, as mentioned, this procedure is too costly as it has to be done in the run itself.
What you could do instead is to pick only, say, the N most often picked numbers, and hard-code them in the code. The actual number N of included memoization numbers can be determined by the memory constraint in the task.
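For reference, one common way to handle the large inputs is plain memoization in a hash map, caching only the values the recursion actually visits; a minimal sketch of that idea (my own illustration, not part of the answer above):

#include <iostream>
#include <unordered_map>
#include <algorithm>

std::unordered_map<long long, long long> memo; //caches only the values actually visited

long long coin(long long n)
{
    if (n < 12) //below 12, splitting is never better than keeping the coin
        return n;
    auto it = memo.find(n);
    if (it != memo.end())
        return it->second;
    long long best = std::max(n, coin(n / 2) + coin(n / 3) + coin(n / 4));
    memo[n] = best;
    return best;
}

int main()
{
    long long n;
    while (std::cin >> n)
        std::cout << coin(n) << '\n';
    return 0;
}

Because the recursion only ever reaches values of the form n divided by products of 2, 3 and 4, the set of distinct cached values stays relatively small even for n close to 10^9.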
So I have this code. Not sure if it works, because the program is still running.
void permute(std::vector<std::string>& wordsVector, std::string prefix, int length, std::string alphabet) {
    if (length == 0) {
        //end the recursion
        wordsVector.push_back(prefix);
    }
    else {
        for (int i = 0; i < alphabet.length(); ++i) {
            permute(wordsVector, prefix + alphabet.at(i), length - 1, alphabet);
        }
    }
}
where I'm trying to get all combinations of characters in the English alphabet of a given length. I'm not sure if the approach is correct at the moment.
Alphabet consists of A-Z in a string of length 26. wordsVector holds all the different combinations of words, prefix is meant to be passed through recursively until a word is made, and length is self-explanatory.
Example, if I give the length of 7 to the function, I expect a size of 26 x 25 x 24 x 23 x 22 x 21 x 20 = 3315312000 if I'm correct, following the formula for permutations.
I don't think programs are meant to run this long so either I'm hitting an infinite loop or something is wrong with my approach. Please advise. Thanks.
Storing all those strings would exhaust memory well before you finish, but concentrating on your question: even if you write an iterative program it will take a long time (not an infinite loop, just very long).
[26L, 650L, 15600L, 358800L, 7893600L, 165765600L, 3315312000L, 62990928000L, 1133836704000L, 19275223968000L, 308403583488000L, 4626053752320000L, 64764752532480000L, 841941782922240000L, 10103301395066880000L, 111136315345735680000L, 1111363153457356800000L, 10002268381116211200000L, 80018147048929689600000L, 560127029342507827200000L, 3360762176055046963200000L, 16803810880275234816000000L, 67215243521100939264000000L, 201645730563302817792000000L, 403291461126605635584000000L, 403291461126605635584000000L]
The above list is the number of possibilities for 1 <= n <= 26. You can see that as n increases the number of possibilities grows tremendously. Say you have a 1 GHz processor that does 10^9 operations per second, and consider the number of possibilities for n = 26: it's 403291461126605635584000000. It's evident that if you sit down to list all the possibilities it will take so long (so many years) that you will feel it has hit an infinite loop. Finally, I have not looked that closely into your code, but in a nutshell, even if you write it correctly and iteratively, don't store the results (again, you can't have this much memory), and just print all possibilities, it is going to take a long time for larger values of n.
EDIT
As jaromax and others said, if you just want to write it for smaller values of n, say less than 10-12, you can write an iterative program to list/print them. It will run quite fast for small values. But if you also want to store them, then n will have to be less than about 5. (It really depends on how much RAM is available; you could also generate some permutations, write them to disk, and continue, but then it depends on how much disk space you can spare. Again, refer to the list of possibilities I posted above; it gives a rough idea of both the time and the space complexity.)
I think there could be quite a problem with doing this much work through recursion: a large part of the calculation is done in recursive calls, which means space is allocated for every call. Try to reformulate it iteratively; I think I had a problem like this before.
Your question implies you think there are 26x25x24x ... permutations
Your code doesn't have anything I can see to avoid "AAAAAAA" being a permutation, in which case there are 26x26x26x ...
So in addition to being a very complicated way of counting in base 26, I think it's also giving bad answers?
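If words with all-distinct letters are what was intended, one hedged sketch of a fix (my own illustration, not the asker's code) is to skip any character that already appears in the prefix:

#include <string>
#include <vector>

//builds all words of the given length whose characters are all distinct,
//by skipping any character that is already present in the prefix
void permuteDistinct(std::vector<std::string>& wordsVector, std::string prefix,
                     int length, const std::string& alphabet)
{
    if (length == 0)
    {
        wordsVector.push_back(prefix);
        return;
    }
    for (char c : alphabet)
    {
        if (prefix.find(c) == std::string::npos) //letter not used yet
            permuteDistinct(wordsVector, prefix + c, length - 1, alphabet);
    }
}

That only changes which strings are generated; for length 7 it still produces over three billion of them, so storing them all remains impractical, as the other answers point out.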
I've got a number which is less than 500,000,000 and I want to factorize it in an efficient way. What algorithm do you suggest? Note: I have a time limit of 0.01 sec!
I've just written this C++ code but it's absolutely awful!
void factorize(int x, vector<doubly> &factors)
{
    for (int i = 2; i <= x; i++)
    {
        if (x % i == 0)
        {
            doubly a;
            a.number = i;
            a.power = 0;
            while (x % i == 0)
            {
                a.power++;
                x /= i;
            }
            factors.push_back(a);
        }
    }
}
and doubly is something like this:
struct doubly
{
    int number;
    int power;
    //and some functions!!
};
just another point: I know that n is not a prime
As you might know, factorization is a hard problem. You might also know that you only have to test divisibility with primes. A small, but well known hint: You only have to test up to the square root of n. I leave the reasoning to you.
Look at the sieve of Eratosthenes. And maybe you find a hint in these questions and answers? How about that one?
If you want to make this even faster - without the full space/time trade-off of this answer - calculate all the prime numbers up to the square root of 500,000,000 in advance and put them into an array. Obviously this breaks down when the upper limit grows ;)
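A minimal sketch of trial division that stops at the square root (my own illustration, reusing a plain version of the doubly struct from the question):

#include <vector>

struct doubly
{
    int number;
    int power;
};

//trial division up to sqrt(x): every composite number has a factor no larger
//than its square root, so whatever remains at the end is a single prime factor
void factorize(int x, std::vector<doubly>& factors)
{
    for (int i = 2; (long long)i * i <= x; ++i)
    {
        if (x % i == 0)
        {
            doubly a;
            a.number = i;
            a.power = 0;
            while (x % i == 0)
            {
                a.power++;
                x /= i;
            }
            factors.push_back(a);
        }
    }
    if (x > 1) //leftover prime factor larger than the square root of the original number
    {
        doubly a;
        a.number = x;
        a.power = 1;
        factors.push_back(a);
    }
}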
Start by studying the algorithms.
What is the fastest factorization algorithm?
Factorize all the integers up to 500,000,000 in advance (doesn't matter how) and store the factors in a database or fixed-length record format. Your lookup will be fast, and the database ought to fit onto a modern PC.
This is one end of the time/space tradeoff, but you didn't say what you're trying to optimize for.
Alternatively, look at the algorithm for GNU coreutils "factor".
You may try Pollard's rho heuristic; it's suitable for composite numbers with relatively small divisors:
Pollard's rho
If this is a homework assignment, I believe you should re-read your lecture material.
Anyway, you know your number is composite and very small, that's fine.
For naive trial division with all numbers, you need sqrt(500,000,000) tests at most - that's about 22,360 for the worst case. You can obviously skip even numbers since they're divisible by 2 (check that first). That brings it down to about 11,180 divisions in 0.01 s. If your computer can do 1.1 M divisions per second then you can just use the naive approach.
Or, you can make a list of primes offline, up to sqrt(500M), and then trial-divide by each of those. This will cut down on the divisions some more (see the sieve sketch at the end of this answer).
Or, if the factors are not too far away from each other, you could try Fermat's method.
If these won't work, you can try to use Pollard's rho and others.
Or, if this is not homework, restate the problem to work around the limitations (as some have suggested, can you precompute the factored numbers beforehand etc.).
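Regarding the off-line list of primes suggested above, a small sieve of Eratosthenes sketch (my own illustration; 22,361 is roughly the square root of 500,000,000):

#include <vector>

//sieve of Eratosthenes: returns all primes up to limit, which is enough
//to trial-divide any number below limit * limit
std::vector<int> primesUpTo(int limit)
{
    std::vector<bool> composite(limit + 1, false);
    std::vector<int> primes;
    for (int i = 2; i <= limit; ++i)
    {
        if (!composite[i])
        {
            primes.push_back(i);
            for (long long j = (long long)i * i; j <= limit; j += i)
                composite[j] = true;
        }
    }
    return primes;
}

//usage: std::vector<int> primes = primesUpTo(22361);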