How to optimize the nested loops - C++

for (a = 1; a <= 25; a++) {
  num1 = m[a];
  for (b = 1; b <= 25; b++) {
    num2 = m[b];
    for (c = 1; c <= 25; c++) {
      num3 = m[c];
      for (d = 1; d <= 25; d++) {
        num4 = m[d];
        for (e = 1; e <= 25; e++) {
          num5 = m[e];
          for (f = 1; f <= 25; f++) {
            num6 = m[f];
            for (g = 1; g <= 25; g++) {
              num7 = m[g];
              for (h = 1; h <= 25; h++) {
                num8 = m[h];
                for (i = 1; i <= 25; i++) {
                  num = num1*100000000 + num2*10000000 +
                        num3*1000000 + num4*100000 +
                        num5*10000 + num6*1000 +
                        num7*100 + num8*10 + m[i];
                  check_prime = 1;
                  for (y = 2; y <= num/2; y++)
                  {
                    if (num % y == 0)
                      check_prime = 0;
                  }
                  if (check_prime != 0)
                  {
                    array[x++] = num;
                  }
                  num = 0;
                }}}}}}}}}
The above code takes an extremely long time to finish executing; in fact, it doesn't even finish. What can I do to optimize the loops and speed up the execution? I am a newbie to C++.

Replace this code with code using a sensible algorithm, such as the Sieve of Eratosthenes. The most important "optimization" is choosing the right algorithm in the first place.
If your algorithm for sorting numbers is to swap them randomly until they're in order, it doesn't matter how much you optimize the selecting of the random entries, swapping them, or checking if they're in order. A bad algorithm will mean bad performance regardless.
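For reference, here is a minimal Sieve of Eratosthenes sketch in C++; the function name sieve and the parameter limit are illustrative only, not taken from the question:

#include <cstddef>
#include <vector>

// Mark, for every number up to limit (inclusive), whether it is prime.
// Assumes limit >= 2; adapt the bound to the range you actually need.
std::vector<bool> sieve(std::size_t limit)
{
    std::vector<bool> is_prime(limit + 1, true);
    is_prime[0] = is_prime[1] = false;
    for (std::size_t p = 2; p * p <= limit; ++p)
        if (is_prime[p])
            for (std::size_t multiple = p * p; multiple <= limit; multiple += p)
                is_prime[multiple] = false;
    return is_prime;
}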

You're checking 25^9 = 3,814,697,265,625 numbers for primality. That's a lot of prime tests, and it will always take a long time. Even in the best case (for performance), when all array entries in m are 0 (never mind that the test then considers 0 a prime), so that the trial division loop never runs, it will take hours to run. When all entries of m are positive, the code as it stands will run for hundreds or thousands of years, since each number will then be trial-divided by more than 50,000,000 candidates.
Looking at the prime check,
check_prime = 1;
for (y = 2; y <= num/2; y++)
{
    if (num % y == 0)
        check_prime = 0;
}
the first glaring inefficiency is that the loop continues even after a divisor has been found and the compositeness of num established. Break out of the loop as soon as you know the outcome.
check_prime = 1;
for (y = 2; y <= num/2; y++)
{
    if (num % y == 0)
    {
        check_prime = 0;
        break;
    }
}
In the unfortunate case that all numbers you test are prime, that won't change a thing, but if all (or almost all, for sufficiently large values of almost) the numbers are composite, it will cut the running time by a factor of at least 5000.
The next thing is that you divide up to num/2. That is not necessary. Why do you stop at num/2, and not at num - 1? Well, because you figured out that the largest proper divisor of num cannot be larger than num/2 because if (num >) k > num/2, then 2*k > num and num is not a multiple of k.
That's good, not everybody sees that.
But you can pursue that train of thought further. If num/2 is a divisor of num, that means num = 2*(num/2) (using integer division, with the exception of num = 3). But then num is even, and its compositeness was already determined by the division by 2, so the division by num/2 will never be tried if it succeeds.
So what's the next possible candidate for the largest divisor that needs to be considered? num/3 of course. But if that's a divisor of num, then num = 3*(num/3) (unless num < 9) and the division by 3 has already settled the question.
Going on, if k < √num and num/k is a divisor of num, then num = k*(num/k) and we see that num has a smaller divisor, namely k (possibly even smaller ones).
So the smallest nontrivial divisor of num is less than or equal to √num. Thus the loop needs only run for y <= √num, or y*y <= num. If no divisor has been found in that range, num is prime.
Now the question arises whether to loop
for(y = 2; y*y <= num; ++y)
or
root = floor(sqrt(num));
for(y = 2; y <= root; ++y)
The first needs one multiplication for the loop condition in each iteration, the second one computation of the square root outside the loop.
Which is faster?
That depends on the average size of num and on whether many of them are prime or not (more precisely, on the average size of the smallest prime divisor). Computing a square root takes much longer than a multiplication; to compensate for that cost, the loop must run for many iterations (on average) - whether "many" means more than 20, more than 100 or more than 1000, say, depends. With num larger than 10^8, as is probably the case here, computing the square root is probably the better choice.
Now we have bounded the number of iterations of the trial division loop to √num whether num is composite or prime and reduced the running time by a factor of at least 5000 (assuming that all m[index] > 0, so that always num >= 10^8) regardless of how many primes are among the tested numbers. If most values num takes are composites with small prime factors, the reduction factor is much larger, to the extent that normally, the running time is almost completely used for testing primes.
Further improvement can be obtained by reducing the number of divisor candidates. If num is divisible by 4, 6, 8, ..., then it is also divisible by 2, so num % y never yields 0 for even y > 2. That means all these divisions are superfluous. By special casing 2 and incrementing the divisor candidate in steps of 2,
if (num % 2 == 0)
{
    check_prime = 0;
} else {
    root = floor(sqrt(num));
    for (y = 3; y <= root; y += 2)
    {
        if (num % y == 0)
        {
            check_prime = 0;
            break;
        }
    }
}
the number of divisions to perform and the running time is roughly halved (assuming enough bad cases that the work for even numbers is negligible).
Now, whenever y is a multiple of 3 (other than 3 itself), num % y will only be computed when num is not a multiple of 3, so these divisions are also superfluous. You can eliminate them by also special-casing 3 and letting y run through only the odd numbers that are not divisible by 3 (start with y = 5, increment by 2 and 4 alternatingly). That chops off roughly a third of the remaining work (if enough bad cases are present).
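In the style of the snippets above, one way to write that alternating step is the following sketch (it assumes 2 and 3 have already been special-cased, so num is divisible by neither; the helper variable step is not from the original code):

check_prime = 1;
root = floor(sqrt(num));
step = 2;                 // alternates 2, 4, 2, 4, ... so y runs 5, 7, 11, 13, 17, 19, ...
for (y = 5; y <= root; y += step, step = 6 - step)
{
    if (num % y == 0)
    {
        check_prime = 0;
        break;
    }
}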
Continuing that elimination process, we need only divide num by the primes not exceeding √num to find whether it's prime or not.
So usually it would be a good idea to find the primes not exceeding the square root of the largest num you'll check, store them in an array and loop
root = floor(sqrt(num));
for (k = 0, y = primes[0]; k < prime_count && (y = primes[k]) <= root; ++k)
{
    if (num % y == 0)
    {
        check_prime = 0;
        break;
    }
}
That is, unless the largest value num can take is small enough: if, for example, you'll always have num < 2^31, then you should instead find the primes up to that limit in a bit-sieve, so that you can look up whether num is prime in constant time. (A sieve of 2^31 bits takes 256 MB; if you only store flags for the odd numbers [which requires special-casing the check for even num], you only need 128 MB to check the primality of numbers below 2^31 in constant time, and further reduction of the required space is possible.)
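A sketch of what such an odds-only bit sieve and its constant-time lookup could look like (the names odd_sieve, build_odd_sieve and is_prime are illustrative, not from the original code):

#include <cstdint>
#include <vector>

// Bit i of the sieve represents the odd number 2*i + 1.
std::vector<uint64_t> odd_sieve;

void build_odd_sieve(uint64_t limit)
{
    uint64_t bits = limit / 2 + 1;
    odd_sieve.assign((bits + 63) / 64, ~0ull);
    odd_sieve[0] &= ~1ull;                       // 1 is not prime
    for (uint64_t p = 3; p * p <= limit; p += 2)
        if (odd_sieve[p / 2 / 64] >> (p / 2 % 64) & 1)
            for (uint64_t m = p * p; m <= limit; m += 2 * p)
                odd_sieve[m / 2 / 64] &= ~(1ull << (m / 2 % 64));
}

bool is_prime(uint64_t num)                      // constant-time lookup
{
    if (num == 2) return true;
    if (num < 2 || num % 2 == 0) return false;   // even numbers are special-cased
    return odd_sieve[num / 2 / 64] >> (num / 2 % 64) & 1;
}

For limit = 2^31 this allocates 2^30 bits, i.e. the 128 MB mentioned above.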
So much for the prime test itself.
If the m array contains numbers divisible by 2 or by 5, it may be worthwhile to reorder the loops, making the loop over i the outermost, and skip the inner loops if m[i] is divisible by 2 or by 5 - all the other numbers are multiplied by powers of 10 before adding, so num would then be a multiple of 2 or 5, respectively, and not prime.
But, despite all that, it will still take long to run the code. Nine nested loops reek of a wrong design.
What is it that you try to do? Maybe we can help finding the correct design.

We can eliminate a lot of redundant calculations by calculating each part of the number as soon as it becomes available. This also shows the trial division test for primality on a 2-3 wheel, up to the square root of the number:
// array m[] is assumed sorted in descending order NB!
// a macro to skip over the duplicate digits
#define I(x) while( x<25 && m[x+1]==m[x] ) ++x;

for( a=1; a <= 25; a++) {
 num1 = m[a]*100000000;
 for( b=1; b <= 25; b++) if (b != a) {
  num2 = num1 + m[b]*10000000;
  for( c=1; c <= 25; c++) if (c != b && c != a) {
   num3 = num2 + m[c]*1000000;
   for( d=1; d <= 25; d++) if (d!=c && d!=b && d!=a) {
    num4 = num3 + m[d]*100000;
    for( e=1; e <= 25; e++) if (e!=d && e!=c && e!=b && e!=a) {
     num5 = num4 + m[e]*10000;
     for( f=1; f <= 25; f++) if (f!=e&&f!=d&&f!=c&&f!=b&&f!=a) {
      num6 = num5 + m[f]*1000;
      limit = floor( sqrt( num6+1000 ));   // num can exceed num6 by at most 999
      for( g=1; g <= 25; g++) if (g!=f&&g!=e&&g!=d&&g!=c&&g!=b&&g!=a) {
       num7 = num6 + m[g]*100;
       for( h=1; h <= 25; h++) if (h!=g&&h!=f&&h!=e&&h!=d&&h!=c&&h!=b&&h!=a) {
        num8 = num7 + m[h]*10;
        for( i=1; i <= 25; i++) if (i!=h&&i!=g&&i!=f&&i!=e&&i!=d
                                    &&i!=c&&i!=b&&i!=a) {
         num = num8 + m[i];
         if( num % 2 != 0 && num % 3 != 0 ) {
          is_prime = 1;
          for ( y=5; y <= limit; y+=6) {
           if ( num % y == 0 ) { is_prime = 0; break; }
           if ( num % (y+2) == 0 ) { is_prime = 0; break; }
          }
          if ( is_prime ) { return( num ); } // largest prime found
         }I(i)}I(h)}I(g)}I(f)}I(e)}I(d)}I(c)}I(b)}I(a)}
This code also eliminates the duplicate indices. As you've indicated in the comments, you pick your numbers out of a 5x5 grid. That means that you must use all different indices. This will bring down the count of numbers to test from 25^9 = 3,814,697,265,625 to 25*24*23*...*17 = 741,354,768,000.
Since you've now indicated that all entries in the m[] array are less than 10, there are certain to be duplicates, which need to be skipped when searching. As Daniel points out, when searching from the top, the first prime found will be the biggest. This is achieved by pre-sorting the m[] array in descending order.

Related

How do I reduce the repeated use of the % operator for faster execution in C

This is the code:
for (i = 1; i <= 1000000; i++) {
    for (j = 1; j <= 1000000; j++) {
        for (k = 1; k <= 1000000; k++) {
            if (i % j == k && j % k == 0)
                count++;
        }
    }
}
Or is it better to reduce any % operation that runs up to a million times in any program?
Edit - I am sorry, initialized by 0; let's say i = 1, ok!
Now, if I reduce the third loop as in #darshan's answer, then both the first and the second loop can still run up to N times,
and it still calculates % about n*n times, e.g. 2021 mod 2022, then 2021 mod 2023, ... and so on.
So my question is: % (modulus) is twice (and maybe more) as heavy as + and -, so is there any other logic that can be implemented here, as an alternative for this question, which gives the same answer as this logic would?
Thank you so much for the knowledgeable comments & help.
The question is:
A triple of 3 integers (A, B, C) is considered to be special if it satisfies the
following properties for a given integer N:
A mod B = C
B mod C = 0
1 ≤ A, B, C ≤ N
I'm curious whether there is any other, smarter solution which can greatly reduce the time complexity.
A much more efficient version would be the one below, but I think it can be optimized much more.
First of all, the modulo (%) operator is quite expensive, so try to avoid it on a large scale.
for (i = 0; i <= 1000000; i++)
    for (j = 0; j <= 1000000; j++)
    {
        a = i % j;
        for (k = 0; k <= j; k++)
            if (a == k && j % k == 0)
                count++;
    }
We placed a = i%j in the second loop because there is no need for it to be calculated every time k changes, as it is independent of k. And for the condition j%k == 0 to be true, k should be <= j, hence the change in the loop bounds.
First of all, your code has undefined behavior due to division by zero: when k is zero then j%k is undefined, so I assume that all your loops should start with 1 and not 0.
Usually the % and the / operators are much slower to execute than any other operation. It is possible to get rid of most invocations of the % operators in your code by several simple steps.
First, look at the if line:
if (i % j == k && j%k == 0)
The i % j == k is a very strict constraint on k, which plays into your hands. It means that it is pointless to iterate over k at all, since there is only one value of k that passes this condition.
for (i = 1; i <= 1000000; i++) {
    for (j = 1; j <= 1000000; j++) {
        k = i % j;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
To get rid of "i % j" switch the loop. This change is possible since this code is affected only by which combinations of i,j are tested, not in the order in which they are introduced.
for (j = 1; j <= 1000000; j++) {
    for (i = 1; i <= 1000000; i++) {
        k = i % j;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
Here it is easy to observe how k behaves, and use that in order to iterate on k directly without iterating on i and so getting rid of i%j. k iterates from 1 to j-1 and then does it again and again. So all we have to do is to iterate over k directly in the loop of i. Note that i%j for j == 1 is always 0, and since k==0 does not pass the condition of the if we can safely start with j=2, skipping 1:
for (j = 2; j <= 1000000; j++) {
    for (i = 1, k = 1; i <= 1000000; i++, k++) {
        if (k == j)
            k = 0;
        // Constrain k to the range of the original loop.
        if (k <= 1000000 && k > 0 && j % k == 0)
            count++;
    }
}
This is still a waste to run j%k repeatedly for the same values of j,k (remember that k repeats several times in the inner loop). For example, for j=3 the values of i and k go {1,1}, {2,2}, {3,0}, {4,1}, {5,2},{6,0},..., {n*3, 0}, {n*3+1, 1}, {n*3+2, 2},... (for any value of n in the range 0 < n <= (1000000-2)/3).
The values beyond n = floor((1000000-2)/3) == 333332 are tricky - let's have a look. For this value of n, i = 333332*3 = 999996 and k = 0, so the last iteration of {i,k}: {n*3,0},{n*3+1,1},{n*3+2,2} becomes {999996, 0}, {999997, 1}, {999998, 2}. You don't really need to iterate over all these values of n since each of them does exactly the same thing. All you have to do is to run it only once and multiply by the number of valid n values (which is 333332 + 1 in this case - adding 1 to include n = 0).
Since that did not cover all elements, you need to continue the remainder of the values: {999999, 0}, {1000000, 1}. Notice that unlike other iterations, there is no third value, since it would set i out-of-range.
for (int j = 2; j <= 1000000; j++) {
    if (j % 1000 == 0) std::cout << std::setprecision(2) << (double)j * 100 / 1000000 << "% \r" << std::flush;
    int innerCount = 0;
    for (int k = 1; k < j; k++) {
        if (j % k == 0)
            innerCount++;
    }
    int innerLoopRepeats = 1000000 / j;
    count += innerCount * innerLoopRepeats;
    // complete the remainder:
    for (int k = 1, i = j * innerLoopRepeats + 1; i <= 1000000; k++, i++) {
        if (j % k == 0)
            count++;
    }
}
This is still extremely slow, but at least it completes in less than a day.
It is possible to have a further speed up by using an important property of divisibility.
Consider the first inner loop (it's almost the same for the second inner loop),
and notice that it does a lot of redundant work, and does it expensively.
Namely, if j%k==0, it means that k divides j and that there is pairK such that pairK*k==j.
It is trivial to calculate the pair of k: pairK=j/k.
Obviously, for k > sqrt(j) there is pairK < sqrt(j). This implies that any k > sqrt(j) can be extracted simply
by scanning all k < sqrt(j). This feature lets you loop over only a square root of all interesting values of k.
Searching only about sqrt(j) values gives a huge performance boost, and the whole program can finish in seconds.
Here is a view of the second inner loop:
// complete the remainder:
for (int k = 1, i = j * innerLoopRepeats + 1; i <= 1000000 && k * k <= j; k++, i++) {
    if (j % k == 0)
    {
        count++;
        int pairI = j * innerLoopRepeats + j / k;
        if (pairI != i && pairI <= 1000000) {
            count++;
        }
    }
}
The first inner loop has to undergo a similar transformation.
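For what it's worth, a sketch of what that transformation of the first inner loop might look like, keeping the names j, k, innerCount and pairK from above:

int innerCount = 0;
for (int k = 1; k * k <= j; k++) {
    if (j % k == 0) {
        innerCount++;                    // k itself is a divisor of j (and k < j here)
        int pairK = j / k;
        if (pairK != k && pairK != j)    // count the paired divisor, but not j itself,
            innerCount++;                // since the original loop only ran k < j
    }
}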
Just reorder the iteration and calculate A based on the constraints:
void findAllSpecial(int N, void (*f)(int A, int B, int C))
{
    // 1 ≤ A,B,C ≤ N
    for (int C = 1; C < N; ++C) {
        // B mod C = 0
        for (int B = C; B < N; B += C) {
            // A mod B = C
            for (int A = C; A < N; A += B) {
                f(A, B, C);
            }
        }
    }
}
No divisions and no useless iterations - just for loops and addition operations.
Below is the obvious optimization:
The 3rd loop with 'k' is really not needed, as there is already a many-to-one mapping from (i, j) -> k.
What I understand from the code is that you want to calculate the number of (i,j) pairs such that the (i%j) is a factor of j. Is this correct or am I missing something?

Speed problem for summation (sum of divisors)

I have to implement this summation in C++. I have tried with this code, but with very high numbers, up to 10^12, it takes too long.
The summation is D(n) = d(1) + d(2) + ... + d(n), where for any positive integer k, d(k) denotes the number of positive divisors of k (including 1 and k itself).
For example, for n = 4: 1 has 1 divisor, 2 has two divisors, 3 has two divisors, and 4 has three divisors. So the result would be 8.
This is my code:
#include <iostream>
#include <algorithm>
#include <cstdio>
using namespace std;

int findDivisors(long long n)
{
    int c = 0;
    for (long long j = 1; j * j <= n; j++)
    {
        if (n % j == 0)
        {
            c++;
            if (j != (n / j))
            {
                c++;
            }
        }
    }
    return c;
}

long long compute(long long n)
{
    long long sum = 0;
    for (long long i = 1; i <= n; i++)
    {
        sum += findDivisors(i);
    }
    return sum;
}

int main()
{
    long long n;
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
    cin >> n;
    cout << compute(n);
}
I think it's not just a simple optimization problem, but maybe I should change the algorithm entirely.
Would anyone have any ideas to speed it up? Thank you.
largest_prime_is_463035818's answer shows an O(N) solution, but the OP is trying to solve this problem
with very high numbers, up to 10^12.
The following is an O(N^(1/2)) algorithm, based on some observations about the sum
n/1 + n/2 + n/3 + ... + n/n
In particular, we can count the number of terms with a specific value.
Consider all the terms n/k where k > n/2. There are n - n/2 of those and all are equal to 1 (integer division), so their sum is n - n/2.
Similar considerations hold for the other dividends, so that we can write the following function
long long count_divisors(long long n)
{
    auto sum{ n };
    for (auto i{ 1ll }, k_old{ n }, k{ n }; i < k; ++i, k_old = k)
    {                                       // ^^^^^ it goes up to sqrt(n)
        k = n / (i + 1);
        sum += (k_old - k) * i;
        if (i == k)
            break;
        sum += k;
    }
    return sum;
}
Here it is tested against the O(N) algorithm, the only difference in the results being the corner cases n = 0 and n = 1.
Edit
Thanks again to largest_prime_is_463035818, who linked the Wikipedia page about the divisor summatory function, where both an O(N) and an O(sqrt(N)) algorithm are mentioned.
An implementation of the latter may look like this
auto divisor_summatory(long long n)
{
    auto sum{ 0ll };
    auto k{ 1ll };
    for ( ; k <= n / k; ++k)
    {
        sum += n / k;
    }
    --k;
    return 2 * sum - k * k;
}
They also add this statement:
Finding a closed form for this summed expression seems to be beyond the techniques available, but it is possible to give approximations. The leading behavior of the series is given by
D(x) = x log x + x(2γ - 1) + Δ(x)
where γ is the Euler–Mascheroni constant, and the error term is Δ(x) = O(sqrt(x)).
I used your brute force approach as reference to have test cases. The ones I used are
compute(12) == 35
compute(100) == 482
Don't get confused by computing factorizations. There are some tricks one can play when factorizing numbers, but you actually don't need any of that. The solution is a plain simple O(N) loop:
#include <iostream>
#include <limits>

long long compute(long long n){
    long long sum = n + 1;
    for (long long i = 2; i < n; ++i){
        sum += n / i;
    }
    return sum;
}

int main()
{
    std::cout << compute(12) << "\n";
    std::cout << compute(100) << "\n";
}
Output:
35
482
Why does this work?
The key is in Marc Glisse's comment:
As often with this kind of problem, this sum actually counts pairs x,
y where x divides y, and the sum is arranged to count first all x
corresponding to a fixed y, but nothing says you have to keep it that
way.
I could stop here, because the comment already explains it all. Though, if it didn't click yet...
The trick is to realize that it is much simpler to count divisors of all numbers up to n rather than n-times counting divisors of individual numbers and take the sum.
You don't need to care about factorizations of eg 123123123 or 52323423 to count all divisors up to 10000000000. All you need is a change of perspective. Instead of trying to factorize numbers, consider the divisors. How often does the divisor 1 appear up to n? Simple: n-times. How often does the divisor 2 appear? Still simple: n/2 times, because every second number is divisible by 2. Divisor 3? Every 3rd number is divisible by 3. I hope you can see the pattern already.
You could even reduce the loop to only loop till n/2, because bigger numbers obviously appear only once as divisor. Though I didn't bother to go further, because the biggest change is from your O(N * sqrt(N)) to O(N).
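For illustration, a sketch of that n/2 variant (the name compute_half is made up here), checked against the same two test values:

#include <iostream>

// Divisors larger than n/2 divide exactly one number in [1, n] (namely themselves),
// so together they contribute n - n/2; every other divisor i appears n/i times.
long long compute_half(long long n) {
    long long sum = n - n / 2;
    for (long long i = 1; i <= n / 2; ++i)
        sum += n / i;
    return sum;
}

int main() {
    std::cout << compute_half(12) << "\n";   // 35
    std::cout << compute_half(100) << "\n";  // 482
}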
Let's start off with some math and reduce the O(n * sqrt(n)) factorization to O(n * log(log(n))); for counting the sum of divisors, the overall complexity is then O(n * log(log(n)) + n * n^(1/3)).
For instance:
In Codeforces himanshujaju explains how we can optimize the solution of finding divisors of a number.
I am simplifying it a little bit.
Let n be the product of three numbers p, q, and r,
so assume p * q * r = n, where p <= q <= r.
The maximum possible value of p is n^(1/3).
Now we can loop over all prime numbers in a range [2, n^(1/3)]
and try to reduce the time complexity of prime factorization.
We will split our number n into two numbers x and y => x * y = n.
And x contains prime factors up to n^(1/3) and y deals with higher prime factors greater than n^(1/3).
Thus gcd(x, y) = 1.
Now define F(n) as the number of divisors of n.
From the multiplicative rule, we can say that
F(x * y) = F(x) * F(y), if gcd(x, y) = 1.
So to find F(n) = F(x * y), first find F(x); then F(y) will be F(n/x).
And there will be 3 cases to cover for y:
1. y is a prime number: F(y) = 2.
2. y is the square of a prime number: F(y) = 3.
3. y is a product of two distinct prime numbers: F(y) = 4.
So once we are done with finding F(x) and F(y), we are also done with finding F(x * y) or F(n).
In Cp-Algorithm there is also a nice explanation of how to count the number of divisors on a number. And also in GeeksForGeeks a nice coding example of how to count the number of divisors of a number in an efficient way. One can check the articles and can generate a nice solution to this problem.
C++ implementation
#include <bits/stdc++.h>
using namespace std;

const int maxn = 1e6 + 11;

bool prime[maxn];
bool primesquare[maxn];
int table[maxn]; // for storing primes

void SieveOfEratosthenes()
{
    for (int i = 2; i < maxn; i++) {
        prime[i] = true;
    }
    for (int i = 0; i < maxn; i++) {
        primesquare[i] = false;
    }
    // 1 is not a prime number
    prime[1] = false;
    for (int p = 2; p * p < maxn; p++) {
        // If prime[p] is not changed, then
        // it is a prime
        if (prime[p] == true) {
            // Update all multiples of p
            for (int i = p * 2; i < maxn; i += p) {
                prime[i] = false;
            }
        }
    }
    int j = 0;
    for (int p = 2; p < maxn; p++) {
        if (prime[p]) {
            // Storing primes in an array
            table[j] = p;
            // Update value in primesquare[p * p],
            // if p is prime.
            if (p < maxn / p) primesquare[p * p] = true;
            j++;
        }
    }
}

// Function to count divisors
int countDivisors(int n)
{
    // If number is 1, then it will have only 1
    // as a factor. So, total factors will be 1.
    if (n == 1)
        return 1;
    // ans will contain total number of distinct
    // divisors
    int ans = 1;
    // Loop for counting factors of n
    for (int i = 0;; i++) {
        // table[i] is not less than cube root n
        if (table[i] * table[i] * table[i] > n)
            break;
        // Calculating power of table[i] in n.
        int cnt = 1; // cnt is power of prime table[i] in n.
        while (n % table[i] == 0) { // if table[i] is a factor of n
            n = n / table[i];
            cnt = cnt + 1; // incrementing power
        }
        // Calculating the number of divisors
        // If n = a^p * b^q then total divisors of n
        // are (p+1)*(q+1)
        ans = ans * cnt;
    }
    // if table[i] is greater than cube root of n
    // First case
    if (prime[n])
        ans = ans * 2;
    // Second case
    else if (primesquare[n])
        ans = ans * 3;
    // Third case
    else if (n != 1)
        ans = ans * 4;
    return ans; // Total divisors
}

int main()
{
    SieveOfEratosthenes();
    int sum = 0;
    int n = 5;
    for (int i = 1; i <= n; i++) {
        sum += countDivisors(i);
    }
    cout << sum << endl;
    return 0;
}
Output
n = 4 => 8
n = 5 => 10
Complexity
Time complexity: O(n * log(log(n)) + n * n^(1/3))
Space complexity: O(n)
Thanks, #largest_prime_is_463035818 for pointing out my mistake.

How to find divisor to maximise remainder?

Given two numbers n and k, find x, 1 <= x <= k that maximises the remainder n % x.
For example, n = 20 and k = 10 the solution is x = 7 because the remainder 20 % 7 = 6 is maximum.
My solution to this is :
int n, k;
cin >> n >> k;
int max = 0;
for (int i = 1; i <= k; ++i)
{
    int xx = n - (n / i) * i; // or int xx = n % i;
    if (max < xx)
        max = xx;
}
cout << max << endl;
But my solution is O(k). Is there any more efficient solution to this?
Not asymptotically faster, but faster, simply by going backwards and stopping when you know that you cannot do better.
Assume k is less than n (otherwise just output k).
int max = 0;
for (int i = k; i > 0; --i)
{
    int xx = n - (n / i) * i; // or int xx = n % i;
    if (max < xx)
        max = xx;
    if (i < max)
        break; // all remaining values will be smaller than max, so break out!
}
cout << max << endl;
(This can be further improved by doing the for loop as long as i > max, thus eliminating one conditional statement, but I wrote it this way to make it more obvious)
Also, check Garey and Johnson's Computers and Intractability book to make sure this is not NP-Complete (I am sure I remember some problem in that book that looks a lot like this). I'd do that before investing too much effort on trying to come up with better solutions.
This problem is equivalent to finding the maximum of the function f(x) = n % x in the given range. Here is what this function looks like:
[graph of n % x omitted]
It is obvious that we could get the maximum sooner if we start with x = k and then decrease x while it makes sense (until x = max + 1). Also, the graph shows that for x larger than sqrt(n) we don't need to decrease x sequentially. Instead, we can jump immediately to the preceding local maximum.
int maxmod(const int n, int k)
{
    int max = 0;
    while (k > max + 1 && k > 4.0 * std::sqrt(n))
    {
        max = std::max(max, n % k);
        k = std::min(k - 1, 1 + n / (1 + n / k));
    }
    for (; k > max + 1; --k)
        max = std::max(max, n % k);
    return max;
}
The magic constant 4.0 improves performance by decreasing the number of iterations of the first (expensive) loop.
Worst case time complexity could be estimated as O(min(k, sqrt(n))). But for large enough k this estimation is probably too pessimistic: we could find maximum much sooner, and if k is significantly greater than sqrt(n) we need only 1 or 2 iterations to find it.
I did some tests to determine how many iterations are needed in the worst case for different values of n:
n                 max. iterations (both / loop1 / loop2)
10^1 .. 10^2        11      2     11
10^2 .. 10^3        20      3     20
10^3 .. 10^4        42      5     42
10^4 .. 10^5        94     11     94
10^5 .. 10^6       196     23    196
up to 10^7         379     43    379
up to 10^8         722     83    722
up to 10^9        1269    157   1269
Growth rate is noticeably better than O(sqrt(n)).
For k > n the problem is trivial (take x = n+1).
For k < n, think about the graph of remainders n % x. It looks the same for all n: the remainders fall to zero at the harmonics of n: n/2, n/3, n/4, after which they jump up, then smoothly decrease towards the next harmonic.
The solution is the rightmost local maximum below k. As a formula x = n//((n//k)+1)+1 (where // is integer division).
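Translated directly into C++, as a sketch of the formula exactly as stated above (best_x is an illustrative name; integer division does the flooring):

// Assumes 1 <= k < n; for k > n the answer is x = n + 1 as noted.
int best_x(int n, int k)
{
    return n / (n / k + 1) + 1;
}
// e.g. best_x(20, 10) == 7, and 20 % 7 == 6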
waves hands around
No value of x which is a factor of n can produce the maximum n%x, since if x is a factor of n then n%x=0.
Therefore, you would like a procedure which avoids considering any x that is a factor of n. But this means you want an easy way to know if x is a factor. If that were possible you would be able to do an easy prime factorization.
Since there is not a known easy way to do prime factorization there cannot be an "easy" way to solve your problem (I don't think you're going to find a single formula, some kind of search will be necessary).
That said, the prime factorization literature has cunning ways of getting factors quickly relative to a naive search, so perhaps it can be leveraged to answer your question.
Nice little puzzle!
Starting with the two trivial cases.
for n < k: any x s.t. n < x <= k solves.
for n = k: x = floor(k / 2) + 1 solves.
My attempts.
for n > k:
x = n
while (x > k) {
x = ceil(n / 2)
}
^---- Did not work.
x = floor(float(n) / (floor(float(n) / k) + 1)) + 1
x = ceil(float(n) / (floor(float(n) / k) + 1)) - 1
^---- "Close" (whatever that means), but did not work.
My pride inclines me to mention that I was first to utilize the greatest k-bounded harmonic, given by 1.
Solution.
Inline with other answers I simply check harmonics (term courtesy of #ColonelPanic) of n less than k, limiting by the present maximum value (courtesy of #TheGreatContini). This is the best of both worlds and I've tested with random integers between 0 and 10000000 with success.
int maximalModulus(int n, int k) {
if (n < k) {
return n;
}
else if (n == k) {
return n % (k / 2 + 1);
}
else {
int max = -1;
int i = (n / k) + 1;
int x = 1;
while (x > max + 1) {
x = (n / i) + 1;
if (n%x > max) {
max = n%x;
}
++i;
}
return max;
}
}
Performance tests:
http://cpp.sh/72q6
Sample output:
Average number of loops:
bruteForce: 516
theGreatContini: 242.8
evgenyKluev: 2.28
maximalModulus: 1.36 // My solution
I'm wrong for sure, but it looks to me that it depends on if n < k or not.
I mean, if n < k, n%(n+1) gives you the maximum, so x = (n+1).
Well, on the other hand, you can start from j = k and go back evaluating n%j until it's equal to n, thus x = j is what you are looking for and you'll get it in max k steps... Too much, is it?
Okay, we want to know the divisor that gives the maximum remainder.
Let n be the number to be divided and i be the divisor.
We are interested in finding the maximum remainder when n is divided by i, for all i < n.
We know that remainder = n - (n/i) * i // equivalent to n%i
To maximise the remainder in the above equation, we have to minimize (n/i)*i.
The minimum of n/i for any i < n is 1.
Note that n/i == 1, for i < n, if and only if i > n/2.
Now we have i > n/2.
The least possible value greater than n/2 is n/2 + 1.
Therefore, the divisor that gives the maximum remainder is i = n/2 + 1.
Here is the code in C++
#include <iostream>
using namespace std;

int maxRemainderDivisor(int n) {
    n = n >> 1;
    return n + 1;
}

int main() {
    int n;
    cin >> n;
    cout << maxRemainderDivisor(n) << endl;
    return 0;
}
Time complexity: O(1)

How to make Random Numbers unique

I am making a random number generator. It asks how many digits the user wants to be in the number. For example, if they enter 2, it will generate random numbers between 10 and 99. I have made the generator, but my issue is that the numbers are not unique.
Here is my code. I am not sure why it is not generating unique numbers. I thought srand(time(NULL)) would do it.
void TargetGen::randomNumberGen()
{
    srand(time(NULL));
    if (intLength == 1)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 1;
            int max = 9;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    else if (intLength == 2)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 10;
            int max = 90;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    if (intLength == 3)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 100;
            int max = 900;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    else if (intLength == 4)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 1000;
            int max = 9000;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    if (intLength == 5)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 10000;
            int max = 90000;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    else if (intLength == 6)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 100000;
            int max = 900000;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    if (intLength == 7)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 1000000;
            int max = 9000000;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    else if (intLength == 8)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 10000000;
            int max = 89999999;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
    if (intLength == 9)
    {
        for (int i = 0; i < intQuantity; i++)
        {
            int min = 100000000;
            int max = 900000000;
            int number1 = rand();
            if (intQuantity > max)
            {
                intQuantity = max;
            }
            cout << number1 % max + min << "\t";
        }
    }
}
Okay, so I thought I had figured out a way to do this without arrays, but it isn't working, so before I switch to the Fisher-Yates method, can someone tell me why this isn't working? It is supposed to take the random number and put it into the variable numGen. Then the variable b is set equal to numGen, just to hold what numGen used to be, so that when the loop goes through and generates another random number it can compare it to the old number; if they are not equal, it will output the new one. If they are equal, then rather than outputting it, it decrements i so that the loop runs again without skipping over that position entirely. However, when I do this it loops infinitely, and I am not sure why.
if (intLength == 1)
{
    for (int i = 0; i < intQuantity; ++i)
    {
        int min = 1;
        int max = 9;
        int number1 = rand();
        int numGen = number1 % max + min;
        if (intQuantity > max)
        {
            intQuantity = max;
        }
        for (int k = 0; k < 1; k++)
        {
            cout << numGen << "\t";
            int b = numGen;
        }
        int b = numGen;
        if (b != numGen)
        {
            cout << numGen << "\t";
        }
        else
        {
            i--;
        }
    }
}
Everyone has interesting expectations for random numbers -- apparently, you expect random numbers to be unique! If you use any good random number generator, your random numbers will never be guaranteed to be unique.
To make this most obvious, if you wanted to generate random numbers in the range [1, 2], and you were to generate two numbers, you would (normally expect to) get one of the following four possibilities with equal probability:
1, 2
2, 1
1, 1
2, 2
It does not make sense to ask a good random number generator to generate the first two, but not the last two.
Now, take a second to think what to expect if you asked to generate three numbers in the same range... 1, 2, then what??
Uniqueness, therefore, is not, and will not be a property of a random number generator.
Your specific problem may require uniqueness, though. In this case, you need to do some additional work to ensure uniqueness.
One way is to keep track of which numbers are already picked. You can keep them in a set, and re-pick if you get one you got earlier. However, this is effective only if you pick a small set of numbers compared to your range; if you pick most of the range, the end of the process becomes inefficient.
If the number count you are going to pick corresponds to most of the range, then using an array of the range, and the using a good shuffling algorithm to shuffle the numbers around is a better solution. (The Fisher-Yates shuffle should do the trick.)
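As an illustration of the shuffle approach, here is a minimal sketch using std::shuffle (which performs a Fisher-Yates shuffle internally); the 2-digit range 10..99 and the quantity of 5 are only example values:

#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main()
{
    std::vector<int> numbers(90);
    std::iota(numbers.begin(), numbers.end(), 10);      // 10, 11, ..., 99

    std::mt19937 rng(std::random_device{}());
    std::shuffle(numbers.begin(), numbers.end(), rng);

    int quantity = 5;                                   // how many unique numbers to print
    for (int i = 0; i < quantity; ++i)
        std::cout << numbers[i] << "\t";
    std::cout << "\n";
}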
Hint 0:
Use quadratic residues from number theory; an integer q is called a quadratic residue modulo p if it is congruent to a perfect square modulo p, i.e., if there exists an integer x such that:
x^2 ≡ q (mod p)
Hint 1:
Theorem: Assuming p is a prime number, the quadratic residue of x is unique as long as 2x < p. For example:
0^2 ≡ 0 (mod 13)
1^2 ≡ 1 (mod 13)
2^2 ≡ 4 (mod 13)
3^2 ≡ 9 (mod 13)
4^2 ≡ 3 (mod 13)
5^2 ≡ 12 (mod 13)
6^2 ≡ 10 (mod 13)
Hint 2:
Theorem: Assuming p is a prime number such that p ≡ 3 (mod 4), not only is x^2 % p (i.e., the quadratic residue) unique for 2x < p, but p - x^2 % p is also unique for 2x > p. For example:
0^2 % 11 = 0
1^2 % 11 = 1
2^2 % 11 = 4
3^2 % 11 = 9
4^2 % 11 = 5
5^2 % 11 = 3
11 - 6^2 % 11 = 8
11 - 7^2 % 11 = 6
11 - 8^2 % 11 = 2
11 - 9^2 % 11 = 7
11 - 10^2 % 11 = 10
Thus, this method provides us with a perfect 1-to-1 permutation on the integers less than p, where p can be any prime such that p ≡ 3 (mod 4).
Hint 3:
unsigned int UniqueRandomMapping(unsigned int x)
{
    const unsigned int p = 11; // any prime number satisfying p ≡ 3 (mod 4)
    unsigned int r = ((unsigned long long)x * x) % p;
    if (x <= p / 2) return r;
    else return p - r;
}
I didn't worry about the bad input numbers (e.g. out of the range).
Remarks
For 32-bit integers, you may choose the largest prime number p such that p ≡ 3 (mod 4) and p < 2^32, which is 4294967291.
Even though this method gives you a 1-to-1 mapping for generating random numbers, it suffers from a clustering issue.
To improve the randomness of the aforementioned method, combine it with
other unique random mapping methods, such as the XOR operator.
I'll assume you can come up with a way to figure out how many numbers you want to use. It's pretty simple, since a user input of 2 goes to 10-99, 3 is 100-999, etc.
If you want to come up with your own implementation of unique, randomly generated numbers, check out these links.
http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
Here is a very similar implementation: https://stackoverflow.com/a/196065/2142219
In essence, you're creating an array of X integers, all set to the value of their index. You randomly select an index between 0 and MAX, taking the value at this index and swapping it with the max value. MAX is then decremented by 1 and you can repeat it by randomly selecting an index between 0 and MAX - 1.
This gives you a random array of 0-999 integers with no duplicates.
Here are two possible approaches to generating unique random numbers in a range.
Keep track of which numbers you have already generated using std::set, and throw away and regenerate numbers as long as they are already in the set. This approach is not recommended if you want to generate a large number of random numbers, due to the birthday paradox.
Generate all numbers in your given range, take a random permutation of them, and output however many the user wants.
Standard random generators will never guarantee unique numbers; in that case the numbers would not be independent.
To generate unique numbers you have to:
Save all numbers generated and compare each new one with the old ones; if there is a collision, regenerate.
or
Use the random_shuffle function: http://en.cppreference.com/w/cpp/algorithm/random_shuffle to get the whole sequence in advance.
Firstly, srand()/rand() commonly have a period of 2^32, which means that after calling srand(), rand() will internally iterate over distinct integers during the first 2^32 calls to rand(). Still, rand() may well return a result with less than 32 bits: such as an int between 0 and RAND_MAX where RAND_MAX is 2^31-1 or 2^15-1, so you may see repeated results as the caller of rand(). You probably read about the period though, or somebody's comment made with awareness of that, and somehow it's been mistaken as uniqueness....
Secondly, given that any call to rand() generates a number far larger than you want, and you're doing this...
number1 % max
The result of "number1 % max" is in the range 0 <= N < max, but the random number itself may have been N plus any multiple of max. In other words, two distinct random numbers that differ by a multiple of max still produce the same result for number1 % max in your program.
To get distinct random numbers within a range, you could prepopulate a std::vector with all the numbers, then std::shuffle them.

Triangle numbers problem....show within 4 seconds

The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
Given an integer n, display the first triangle number having at least n divisors.
Sample Input: 5
Output: 28
Input Constraints: 1 <= n <= 320
I was obviously able to do this question, but I used a naive algorithm:
Get n.
Find triangle numbers and check their number of factors using the mod operator.
But the challenge was to show the output within 4 seconds of input. On high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2d array first and then get the input from the user and search the array. But somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code. Or if there are any better ways, please tell me.
Here's a hint:
The number of divisors according to the Divisor function is the product of the power of each prime factor plus 1. For example, let's consider the exponential prime representation of 28:
28 = 2^2 * 3^0 * 5^0 * 7^1 * 11^0 ...
The product of each exponent plus one is: (2+1)*(0+1)*(0+1)*(1+1)*(0+1)... = 6, and sure enough, 28 has 6 divisors.
Now, consider that the nth triangular number can be computed in closed form as n(n+1)/2. We can multiply numbers written in the exponential prime form simply by adding up the exponents at each position. Dividing by two just means decrementing the exponent on the two's place.
Do you see where I'm going with this?
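One way to cash in on that hint, as a sketch: n and n+1 share no prime factors, so the divisor count of n*(n+1)/2 splits into two independent divisor counts once the even one of the pair is halved. Plain trial division is used for count_divisors here only to keep the sketch short, where the hint proper suggests tracking prime exponents instead; the names are illustrative:

#include <iostream>

int count_divisors(long long x)
{
    int count = 0;
    for (long long d = 1; d * d <= x; ++d)
        if (x % d == 0)
            count += (d * d == x) ? 1 : 2;   // d and x/d, counted once when equal
    return count;
}

int main()
{
    for (long long n = 1; ; ++n) {
        long long a = (n % 2 == 0) ? n / 2 : n;          // halve the even one of n, n+1
        long long b = (n % 2 == 0) ? n + 1 : (n + 1) / 2;
        if (count_divisors(a) * count_divisors(b) > 5) { // "over five divisors"
            std::cout << a * b << "\n";                  // prints 28
            break;
        }
    }
}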
Well, you don't go into a lot of detail about what you did, but I can give you an optimization that can be used, if you didn't think of it...
If you're using the straightforward method of trying to find factors of a number n, by using the mod operator, you don't need to check all the numbers < n. That obviously would take n comparisons...you can just go up to floor(sqrt(n)). For each factor you find, just divide n by that number, and you'll get the conjugate value, and not need to find it manually.
For example: say n is 15.
We loop, and try 1 first. Yep, the mod checks out, so it's a factor. We divide n by the factor to get the conjugate value, so we do (15 / 1) = 15...so 15 is a factor.
We try 2 next. Nope. Then 3. Yep, which also gives us (15 / 3) = 5.
And we're done, because 4 is > floor(sqrt(n)). Quick!
If you didn't think of it, that might be something you could leverage to improve your times...overall you go from O(n) to O(sqrt (n)) which is pretty good (though for numbers this small, constants may still weigh heavily.)
I was in a programming competition way back in school where there was some similar question with a run time limit. the team that "solved" it did as follows:
1) solve it with a brute force slow method.
2) write a program to just print out the answer (you found using the slow method), which will run sub second.
I thought this was bogus, but they won.
see Triangular numbers: a(n) = C(n+1,2) = n(n+1)/2 = 0+1+2+...+n. (Formerly M2535 N1002)
then pick the language you want to implement it in; see this:
"... Python
import math

def diminishing_returns(val, scale):
    if val < 0:
        return -diminishing_returns(-val, scale)
    mult = val / float(scale)
    trinum = (math.sqrt(8.0 * mult + 1.0) - 1.0) / 2.0
    return trinum * scale
..."
First, create table with two columns: Triangle_Number Count_of_Factors.
Second, derive from this a table with the same columns, but consisting only of the 320 rows of the lowest triangle number with a distinct number of factors.
Perform your speedy lookup to the second table.
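One possible reading of that two-step suggestion, as a sketch with illustrative names: precompute, for every n from 1 to 320, the first triangle number with at least n divisors, then answer each query with a single array lookup:

#include <algorithm>
#include <iostream>
#include <vector>

int count_divisors(long long x)
{
    int count = 0;
    for (long long d = 1; d * d <= x; ++d)
        if (x % d == 0)
            count += (d * d == x) ? 1 : 2;
    return count;
}

int main()
{
    const int max_n = 320;
    std::vector<long long> answer(max_n + 1, 0);
    int filled = 0;
    for (long long i = 1, tri = 0; filled < max_n; ++i) {
        tri += i;                                         // i-th triangle number
        int divs = count_divisors(tri);
        for (int n = 1; n <= std::min(divs, max_n); ++n)
            if (answer[n] == 0) { answer[n] = tri; ++filled; }
    }
    int n;
    while (std::cin >> n)                                 // assumes 1 <= n <= 320 as stated
        std::cout << answer[n] << "\n";
}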
If you solved the problem, you should be able to access the thread on Project Euler in which people post their (some very efficient) solutions.
If you're going to copy and paste a problem, please cite the source (unless it was your teacher who stole it); and I second Wouter van Niferick's comment.
Well, at least you got a good professor. Performance is important.
Since you have a program that can do the job, you can precalculate all of the answers for 1 .. 320.
Store them in an array, then simply subscript into the array to get the answer. That will be very fast.
Compile with care, winner of worst code of the year :D
#include <iostream>
#include <vector>
#include <cmath>

bool isPrime( unsigned long long number ){
    if( number != 2 && number % 2 == 0 )
        return false;
    for( int i = 3;
         i < static_cast<unsigned long long>
             ( sqrt(static_cast<double>(number)) + 1 )
         ; i += 2 ){
        if( number % i == 0 )
            return false;
    }
    return true;
}

unsigned int p;
unsigned long long primes[2048];   // enough primes to cover the prime factors encountered below

void initPrimes(){
    primes[0] = 2;
    primes[1] = 3;
    unsigned long long number = 5;
    for( unsigned int i = 2; i < 2048; i++ ){
        while( !isPrime(number) )
            number += 2;
        primes[i] = number;
        number += 2;
    }
    return;
}

unsigned long long nextPrime(){
    unsigned int ret = p;
    p++;
    return primes[ret];
}

unsigned long long numOfDivs( unsigned long long number ){
    p = 0;
    std::vector<unsigned long long> v;
    unsigned long long prime = nextPrime(), divs = 1, i = 0;
    while( number >= prime ){
        i = 0;
        while( number % prime == 0 ){
            number /= prime;
            i++;
        }
        if( i )
            v.push_back( i );
        prime = nextPrime();
    }
    for( unsigned n = 0; n < v.size(); n++ )
        divs *= (v[n] + 1);
    return divs;
}

unsigned long long nextTriNumber(){
    static unsigned long long triNumber = 1, next = 2;
    unsigned long long retTri = triNumber;
    triNumber += next;
    next++;
    return retTri;
}

int main()
{
    initPrimes();
    unsigned long long n = nextTriNumber();
    unsigned long long divs = 500;
    while( numOfDivs(n) <= divs )
        n = nextTriNumber();
    std::cout << n;
    std::cin.get();
}
def first_triangle_number_with_over_N_divisors(N):
    n = 4
    primes = [2, 3]
    fact = [None, None, {2:1}, {3:1}]
    def num_divisors (x):
        num = 1
        for mul in fact[x].values():
            num *= (mul+1)
        return num
    while True:
        factn = {}
        for p in primes:
            if p > n//2: break
            r = n // p
            if r * p == n:
                factn = fact[r].copy()
                factn[p] = factn.get(p,0) + 1
        if len(factn)==0:
            primes.append(n)
            factn[n] = 1
        fact.append(factn)
        (x, y) = (n-1, n//2) if n % 2 == 0 else (n, (n-1)//2)
        numdiv = num_divisors(x) * num_divisors(y)
        if numdiv >= N:
            print('Triangle number %d: %d divisors'
                  %(x*y, numdiv))
            break
        n += 1

>>> first_triangle_number_with_over_N_divisors(500)
Triangle number 76576500: 576 divisors
Dude, here is your code, go have a look. It calculates the first triangle number that has more than 500 divisors.
#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

int main() {
    long long divisors = 0;
    long long nat_num = 0;
    long long tri_num = 0;
    int tri_sqrt = 0;
    while (1) {
        divisors = 0;
        nat_num++;
        tri_num = nat_num + tri_num;
        tri_sqrt = floor(sqrt((double)tri_num));
        long long i = 0;
        for (i = tri_sqrt; i >= 1; i--) {
            long long remainder = tri_num % i;
            if (remainder == 0 && tri_num == 1) {
                divisors++;
            }
            else if (remainder == 0 && tri_num != 1) {
                divisors++;
                divisors++;
            }
        }
        if (divisors > 100) {
            cout << "No. of divisors: " << divisors << endl << tri_num << endl;
        }
        if (divisors > 500)
            break;
    }
    cout << "Final Result: " << tri_num << endl;
    system("pause");
}
Boojum's answer motivated me to write this little program. It seems to work well, although it does use a brute force method of computing primes. It's neat how all the natural numbers can be broken down into prime number components.
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <iomanip>
#include <vector>

//////////////////////////////////////////////////////////////////////////////
typedef std::vector<size_t> uint_vector;
//////////////////////////////////////////////////////////////////////////////
// add a prime number to primes[]
void
primeAdd(uint_vector& primes)
{
    size_t n;
    if (primes.empty())
    {
        primes.push_back(2);
        return;
    }
    for (n = *(--primes.end()) + 1; ; ++n)
    {
        // n is even -> not prime
        if ((n & 1) == 0) continue;
        // look for a divisor in [2,n)
        for (size_t i = 2; i < n; ++i)
        {
            if ((n % i) == 0) continue;
        }
        // found a prime
        break;
    }
    primes.push_back(n);
}
//////////////////////////////////////////////////////////////////////////////
void
primeFactorize(size_t n, uint_vector& primes, uint_vector& f)
{
    f.clear();
    for (size_t i = 0; n > 1; ++i)
    {
        while (primes.size() <= i) primeAdd(primes);
        while (f.size() <= i) f.push_back(0);
        while ((n % primes[i]) == 0)
        {
            ++f[i];
            n /= primes[i];
        }
    }
}
//////////////////////////////////////////////////////////////////////////////
int
main(int argc, char** argv)
{
    // allow specifying number of TN's to be evaluated
    size_t lim = 1000;
    if (argc > 1)
    {
        lim = atoi(argv[1]);
    }
    if (lim == 0) lim = 1000;
    // prime numbers
    uint_vector primes;
    // factors of (n), (n + 1)
    uint_vector* f = new uint_vector();
    uint_vector* f1 = new uint_vector();
    // sum vector
    uint_vector sum;
    // prime factorize (n)
    size_t n = 1;
    primeFactorize(n, primes, *f);
    // iterate over triangle-numbers
    for (; n <= lim; ++n)
    {
        // prime factorize (n + 1)
        primeFactorize(n + 1, primes, *f1);
        while (f->size() < f1->size()) f->push_back(0);
        while (f1->size() < f->size()) f1->push_back(0);
        size_t numTerms = f->size();
        // compute prime factors for (n * (n + 1) / 2)
        sum.clear();
        size_t i;
        for (i = 0; i < numTerms; ++i)
        {
            sum.push_back((*f)[i] + (*f1)[i]);
        }
        --sum[0];
        size_t numFactors = 1, tn = 1;
        for (i = 0; i < numTerms; ++i)
        {
            size_t exp = sum[i];
            numFactors *= (exp + 1);
            while (exp-- != 0) tn *= primes[i];
        }
        std::cout
            << n << ". Triangle number "
            << tn << " has " << numFactors << " factors."
            << std::endl;
        // prepare for next iteration
        f->clear();
        uint_vector* tmp = f;
        f = f1;
        f1 = tmp;
    }
    delete f;
    delete f1;
    return 0;
}