There is a number N.
On every iteration it becomes equal to (N*2)-1.
I need to find out after how many steps the number becomes a multiple of the original N
(1 ≤ N ≤ 2·10^9).
For example:
N = 7; count = 0
N_ = 7*2-1 = 13; count = 1; N_ % N != 0
N_ = 13*2-1 = 25; count = 2; N_ % N != 0
N_ = 25*2-1 = 49; count = 3; N_ % N == 0
Answer is 3
If it is impossible to reach a multiple this way, then output -1.
#include <iostream>
using namespace std;
int main(){
    int N,M,c;
    cin >> N;
    if (N%2==0) {
        cout << -1;
        return 0;
    }
    M = N*2-1;
    c = 1;
    while (M%N!=0){
        c+=1;
        M=M*2-1;
    }
    cout << c;
    return 0;
}
It does not fit within the 1-second time limit. How can the algorithm be optimized?
P.S. All the answers below are optimizations, but they still don't fit in 1 second, because the algorithm needs to be changed in principle. The solution was to use Euler's theorem.
The problem, as other answers have suggested, is equivalent to finding c such that pow(2, c) = 1 mod N. This is impossible if N is even, and possible otherwise (as your code suggests you know).
A linear-time approach is:
int c = 1;
uint64_t m = 2;
while (m != 1){
    c += 1;
    m = (2*m)%N;
}
printf("%d\n", c);
printf("%d\n", c);
To solve this in 1 second, I don't think you can use a linear-time algorithm. The worst cases will be when N is prime and large. For example 1999999817 for which the above code runs in around 10 seconds on my laptop.
Instead, factor N into its prime factors. Solve 2^c = 1 mod p^k for each prime power p^k that appears in the prime factorization of N. Then combine the results using the Chinese Remainder Theorem.
When finding the c for a given prime power, if k=1 then c = p-1 always works (Fermat's little theorem); the minimal such c is the multiplicative order of 2 mod p, which divides p-1. When k is larger, the details are quite messy, but you can find a written solution here: https://math.stackexchange.com/questions/1863037/discrete-logarithm-modulo-powers-of-a-small-prime
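A different route to the same answer (a sketch, not the CRT method above): the count is the multiplicative order of 2 modulo N, which divides φ(N), so you can factor N, compute φ(N), and then strip prime factors from the exponent while 2 raised to the reduced exponent is still 1 mod N. The helper names below are mine.

#include <cstdint>
#include <iostream>
#include <vector>

// N fits in 32 bits, so 64-bit intermediates are enough for a*b%m.
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1 % m;
    while (e) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
        e >>= 1;
    }
    return r;
}

// Distinct prime factors by trial division (fine for n <= 2*10^9).
static std::vector<uint64_t> primeFactors(uint64_t n) {
    std::vector<uint64_t> ps;
    for (uint64_t p = 2; p * p <= n; ++p)
        if (n % p == 0) { ps.push_back(p); while (n % p == 0) n /= p; }
    if (n > 1) ps.push_back(n);
    return ps;
}

int main() {
    uint64_t N;
    std::cin >> N;
    if (N % 2 == 0) { std::cout << -1 << "\n"; return 0; }
    if (N == 1) { std::cout << 1 << "\n"; return 0; }

    uint64_t phi = N;                       // Euler's totient of N
    for (uint64_t p : primeFactors(N)) phi = phi / p * (p - 1);

    uint64_t e = phi;                       // shrink phi down to the order of 2
    for (uint64_t q : primeFactors(phi))
        while (e % q == 0 && powmod(2, e / q, N) == 1) e /= q;

    std::cout << e << "\n";
    return 0;
}

Each powmod call costs O(log N) multiplications, so even the worst case mentioned above (N = 1999999817) finishes far inside the limit.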
The problem is that you're overflowing: the int data type only has 32 bits and overflows above 2^31 - 1. In this problem you don't need to keep the actual value of M; you only need to keep it modulo N.
while (M%N!=0){
    c+=1;
    M=M*2-1;
    M%=N;
}
Edit: In addition, you don't actually need more than N iterations to check whether a zero remainder ever appears, since there are only N different residues modulo N and the sequence just keeps cycling. So you also need to keep that in mind in case there is no zero remainder.
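Putting both fixes together, a minimal sketch in the spirit of the original code (the 64-bit M and the iteration cap are additions):

#include <cstdint>
#include <iostream>
using namespace std;

int main(){
    uint32_t N;
    cin >> N;
    if (N%2==0) {
        cout << -1;
        return 0;
    }
    uint64_t M = (uint64_t)N*2-1;
    uint64_t c = 1;
    // Only N distinct residues exist, so stop after N steps if none was 0.
    while (M%N!=0 && c < N){
        c+=1;
        M=M*2-1;
        M%=N;
    }
    if (M%N==0) cout << c;
    else cout << -1;
    return 0;
}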
There is no doubt that the main problem with your code is signed integer overflow.
I added a print of M whenever M was changed (i.e. cout << M << endl;) and gave it the input 29. This is what I got:
57
113
225
449
897
1793
3585
7169
14337
28673
57345
114689
229377
458753
917505
1835009
3670017
7340033
14680065
29360129
58720257
117440513
234881025
469762049
939524097
1879048193
-536870911
-1073741823
-2147483647
1
1
1
1
... endless loop
As you see, you have signed integer overflow. That is undefined behavior in C++, so anything may happen! On my machine I ended up with a nasty endless loop. That must be fixed before considering performance.
The simple fix is to add a line like
M = M % N;
whenever M is changed. See the answer from @Malek.
Besides that, you should also use an unsigned integer type, e.g. uint32_t, for all variables.
However, that will not improve performance.
If you still have performance issue after the above fix, you can try this instead:
uint32_t N;
cin >> N;
if (N%2==0) {
    cout << -1;
    return 0;
}

// Alternative algorithm
uint32_t R,c;
R = 1;
c = 1;
while (R != N){
    R = 2*R + 1;
    if (R > N) R = R - N;
    ++c;
}
cout << c;
On my laptop this algorithm is 2.5 times faster when testing on all odd numbers in the range 1..100000. However, it might not be sufficient for all numbers in the range 1..2*10^9.
Also notice the use of uint32_t to avoid integer overflow.
Related
My program counts the prime numbers produced by this expression:
((1 + sin(0.1*i))*k) + 1, i = 1, 2, ..., N.
Input Format:
No more than 100 examples. Every example has 2 positive integers on the same line.
Output Format:
Print each number on a separate line.
Sample Input:
4 10
500 100
Sample Output:
5
17
But my algorithm is not efficient enough. How can I add a Sieve of Eratosthenes so that it is efficient enough not to print "Terminated due to timeout"?
#include <iostream>
#include <cmath>
#include <cstdlib>
using namespace std;

int main() {
    long long k, n;
    int j;
    while (cin >> k >> n) {
        if (n > 1000 && k > 1000000000000000000) continue;
        int count = 0;
        for (int i = 1; i <= n; i++) {
            int res = ((1 + sin(0.1*i)) * k) + 1;
            for (j = 2; j < res; j++) {
                if (res % j == 0) break;
            }
            if (j == res) count++;
        }
        cout << count << endl;
    }
    system("pause");
    return 0;
}
You can improve your speed by 10x simply by doing a better job with your trial division. You're testing all integers from 2 to res instead of treating 2 as a special case and testing just odd numbers from 3 to the square root of res:
// k <= 10^3, n <= 10^9
#include <iostream>
#include <cmath>
using namespace std;

int main() {
    unsigned k;
    unsigned long long n;
    while (cin >> k >> n) {
        unsigned count = 0;
        for (unsigned long long i = 1; i <= n; i++) {
            unsigned long long j, res = (1 + sin(0.1 * i)) * k + 1;
            bool is_prime = true;
            if (res <= 2 || res % 2 == 0) {
                is_prime = (res == 2);
            } else {
                for (j = 3; j * j <= res; j += 2) {
                    if (res % j == 0) {
                        is_prime = false;
                        break;
                    }
                }
            }
            if (is_prime) {
                count++;
            }
        }
        cout << count << endl;
    }
}
Though k = 500 and n = 500000000 is still going to take forty seconds or so.
EDIT: I added a third way to improve efficiency.
EDIT2: Added an explanation of why the Sieve should not be the solution, plus some trigonometric relations. Moreover, I added a note on the history of the question.
Your problem is not to count all the prime numbers in a given range, but only those which are generated by your function.
Therefore, I don't think that the Sieve of Eratosthenes is the solution for this particular exercise, for the following reason: n is always rather small while k can be very large. If k is very large, then the Sieve algorithm would have to generate a huge number of primes, only to use them for a small number of candidates.
You can improve the efficiency of your program in three ways:
Avoid calculating sin(.) every time. You can use trigonometric relations, for example. Moreover, the first time you calculate these values, store them in an array and reuse them; calculating sin() is very time consuming.
In your test to check whether a number is prime, limit the search to sqrt(res). Moreover, consider testing only odd values of j, plus 2.
If a candidate res is equal to the previous one, avoid redoing the test.
A bit of trigonometry
If c = cos(0.1) and s = sin(0.1), you can use the relations:
sin(0.1*(i+1)) = s*cos(0.1*i) + c*sin(0.1*i)
cos(0.1*(i+1)) = c*cos(0.1*i) - s*sin(0.1*i)
If n were large, it would be necessary to recompute the sin() with the library function from time to time to avoid accumulating too much rounding error. But that should not be the case here, as n is always rather small.
However, as I mentioned, it is better to use only the "memorization" trick in a first step and check if it is enough.
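As a sketch of how the sine table could be filled with that recurrence (a hypothetical helper; the name precomputeSines and its signature are mine, not the answer's):

#include <cmath>
#include <vector>

// Fill sines[i] = sin(0.1 * i) for i = 0..n using the angle-addition
// recurrence, so std::sin and std::cos are each called only once.
std::vector<double> precomputeSines(int n) {
    const double s = std::sin(0.1), c = std::cos(0.1);
    std::vector<double> sines(n + 1), cosines(n + 1);
    sines[0] = 0.0;
    cosines[0] = 1.0;
    for (int i = 1; i <= n; ++i) {
        sines[i]   = s * cosines[i - 1] + c * sines[i - 1];
        cosines[i] = c * cosines[i - 1] - s * sines[i - 1];
    }
    return sines;
}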
A note on the history of this question and why this answer:
Recently, this site received several questions of the form "how to improve my program to count the number of primes generated by this k*sin() function...". To my knowledge, these questions were all closed as duplicates, on the grounds that the Sieve is the solution and was explained in a previous similar (but slightly different) question. Now the same question has reappeared in a slightly different form: "How can I insert the Sieve algorithm in this program... (with k*sin() again)". And then I realised that the Sieve is not the solution. This is not a criticism of the previous closures, as I made the same mistake in understanding the question. However, I think it is time to propose a new solution, even if it does not match the new question perfectly.
When you make use of a simple wheel factorization, you can obtain a very nice speedup of your code. Wheel factorization of order 2 uses the fact that all primes bigger than 3 can be written as 6n+1 or 6n+5 for natural n. This means that you only have to do 2 divisions per 6 numbers. Going further, all primes bigger than 5 can be written as 30n+m, with m in {1,7,11,13,17,19,23,29} (8 divisions per 30 numbers).
Using this simple principle, you can write the following function to test your primes (wheel {2,3}):
bool isPrime(long long num) {
    if (num == 1) return false;             // 1 is not prime
    if (num < 4) return true;               // 2 and 3 are prime
    if (num % 2 == 0) return false;         // divisible by 2
    if (num % 3 == 0) return false;         // divisible by 3
    long long w = 5;                        // long long so that w*w cannot overflow
    while (w*w <= num) {
        if (num % w == 0) return false;     // not prime
        if (num % (w+2) == 0) return false; // not prime
        w += 6;
    }
    return true;                            // must be prime
}
You can adapt the above for the wheel {2,3,5}. This function can be used in the main program as:
int main() {
    long long k, n;
    while (cin >> k >> n) {
        if (n > 1000 && k > 1000000000000000000) continue;
        int count = 0;
        for (int i = 1; i <= n; i++) {
            long long res = ((1 + sin(0.1*i)) * k) + 1;
            if (isPrime(res)) { count++; }
        }
        cout << count << endl;
    }
    return 0;
}
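The {2,3,5} wheel mentioned above could look roughly like this (my sketch, not part of the original answer; the gap table is one way to encode the distances between successive numbers coprime to 30):

// Wheel {2,3,5}: after handling 2, 3 and 5, only trial divisors coprime
// to 30 are tested; gaps[] holds the distances between them (7, 11, 13, ...).
bool isPrime235(long long num) {
    if (num < 2) return false;
    if (num % 2 == 0) return num == 2;
    if (num % 3 == 0) return num == 3;
    if (num % 5 == 0) return num == 5;
    static const int gaps[8] = {4, 2, 4, 2, 4, 6, 2, 6};
    long long w = 7;
    int g = 0;
    while (w * w <= num) {
        if (num % w == 0) return false;
        w += gaps[g];
        g = (g + 1) & 7;
    }
    return true;
}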
A simple timing gives me for the original code (g++ prime.cpp)
% time echo "6000 100000000" | ./a.out
12999811
echo "6000 100000000" 0.00s user 0.00s system 48% cpu 0.002 total
./a.out 209.66s user 0.00s system 99% cpu 3:29.70 total
while the optimized version gives me
% time echo "6000 100000000" | ./a.out
12999811
echo "6000 100000000" 0.00s user 0.00s system 51% cpu 0.002 total
./a.out 10.12s user 0.00s system 99% cpu 10.124 total
Other improvements can be made but might have minor effects:
precompute your sine table sin(0.1*i) for i from 0 to 1000. This will avoid recomputing those sines over and over. This, however, has a minor impact as most of the time is spent in the prime test.
Checking if res(i) == res(i+1): this has barely any impact as, depending on n and k, most consecutive res are not equal.
Using a lookup table might be handier; this does have an impact.
original answer:
My suggestion is the following:
Precompute your sine table sin(0.1*i) for i from 0 to 1000. This will avoid recomputing those sines over and over. Also, do it smartly (see point 3).
Find the largest possible value of res, which is res_max = (2*k)+1.
Find all primes up to res_max using the Sieve of Eratosthenes. Also, realize that all primes bigger than 3 can be written as 6n+1 or 6n+5 for natural n. Or even further, all primes bigger than 5 can be written as 30n+m, with m in {1,7,11,13,17,19,23,29}. This is what is called wheel factorization. So do not bother checking any other number. (a tiny bit more info here)
Have a lookup table that states if a number is a prime.
Do all your looping over the lookup table.
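A sketch of these steps (my reading of the list above, not the answer's code; it is only practical when k is moderate, since the sieve needs memory proportional to res_max = 2*k + 1):

#include <cmath>
#include <iostream>
#include <vector>

int main() {
    long long k, n;
    while (std::cin >> k >> n) {
        long long resMax = 2 * k + 1;
        // Sieve of Eratosthenes up to the largest possible candidate.
        std::vector<bool> isPrime(resMax + 1, true);
        isPrime[0] = false;
        isPrime[1] = false;
        for (long long i = 2; i * i <= resMax; ++i)
            if (isPrime[i])
                for (long long j = i * i; j <= resMax; j += i)
                    isPrime[j] = false;

        long long count = 0;
        for (long long i = 1; i <= n; ++i) {
            long long res = (long long)(((1 + std::sin(0.1 * i)) * k) + 1);
            if (res <= resMax && isPrime[res]) ++count;
        }
        std::cout << count << "\n";
    }
    return 0;
}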
I am working on a program in which I must print out the number of primes, including 1 and 239, from 1 to 239 (I know one and/or two may not be prime numbers, but we will consider them as such for this program). It must be a pretty simple program because we have only gone over some basics. So far my code is as follows, which seems like decent logical flow to me, but it doesn't produce output.
#include <iostream>
using namespace std;
int main()
{
    int x;
    int n = 1;
    int y = 1;
    int i = 0;
    while (n <= 239)
    {
        x = n % y;
        if (x = 0)
            i++;
        if (y < n)
            y++;
        n++;
        while (i == 2)
            cout << n;
    }
    return 0;
}
The way I want this to work is to take n, and as long as n is 239 or less, perform modulus division with every number from 1 up to n. Every time a number y goes evenly into n, a counter is increased by 1. If the counter is equal to 2, then the number is prime and we print it to the screen. Any help would be greatly appreciated. Thanks.
#include <iostream>
#include <cmath>
#include <string>

int main() {
    std::cout << std::to_string(2) << std::endl;
    for (unsigned int i = 3; i < 240; i += 2) {
        unsigned int j = 3;
        int sq = std::sqrt(i);
        for (; j <= sq; j += 2) if (!(i % j)) break;
        if (j > sq) std::cout << std::to_string(i) << std::endl;
    }
    return 0;
}
first of all, the prime definition: A prime number (or a prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself.
so you can skip all the even numbers (and hence ... i+=2).
Moreover, there is no point in trying to divide by a number greater than sqrt(i), because then it would have a divisor less than sqrt(i), and the code would already have found it and moved on to the next number.
Considering only odd numbers means that we can also skip even numbers as divisors (hence ... j+=2).
In your code there are clear beginner errors, like (x = 0) instead of x == 0, but the logic doesn't hold up either. I agree with @NathanOliver: you need to learn to use a debugger to find all the errors. For the rest, good luck with your studies.
Let's start with the common errors.
First, you want to take input from the user using cin:
cin>>n; // write it before starting your while loop
then,
if (x = 0)
should be:
if (x == 0)
change your second while loop to:
while (i == 2){
    cout << n;
    i++;
}
There are n numbers from 1 to n. I need to find
∑ gcd(i, n) for i = 1 to n
for n up to 10^7. I used Euclid's algorithm for gcd but it gave TLE. Is there any efficient method for finding the above sum?
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;

int gcd(int a, int b)
{
    return b == 0 ? a : gcd(b, a % b);
}

int main()
{
    ll n, sum = 0;
    scanf("%lld", &n);
    for (int i = 1; i <= n; i++)
    {
        sum += gcd(i, n);
    }
    printf("%lld\n", sum);
    return 0;
}
You can do it via bulk GCD calculation.
You should find all prime divisors and their powers. This can be done in O(sqrt(N)).
After that, compose the GCD table.
My code snippet is in C#; it is not difficult to convert it into C++:
int[] gcd = new int[x + 1];
for (int i = 1; i <= x; i++) gcd[i] = 1;

for (int i = 0; i < p.Length; i++)
    for (int j = 0, h = p[i]; j < c[i]; j++, h *= p[i])
        for (long k = h; k <= x; k += h)
            gcd[k] *= p[i];

long sum = 0;
for (int i = 1; i <= x; i++) sum += gcd[i];
Here p is the array of prime divisors and c holds the power of each divisor.
For example if n = 125
p = [5]
c = [3]
125 = 5^3
if n = 12
p = [2,3]
c = [2,1]
12 = 2^2 * 3^1
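A possible C++ conversion of that snippet (my sketch; the trial-division factorization and the names gcdTab, p, c are mine):

#include <iostream>
#include <vector>

int main() {
    long long n;
    std::cin >> n;

    // Factor n by trial division: p[i] is a prime divisor, c[i] its exponent.
    std::vector<int> p, c;
    long long m = n;
    for (long long d = 2; d * d <= m; ++d) {
        if (m % d == 0) {
            int e = 0;
            while (m % d == 0) { m /= d; ++e; }
            p.push_back((int)d);
            c.push_back(e);
        }
    }
    if (m > 1) { p.push_back((int)m); c.push_back(1); }

    // gcdTab[k] ends up equal to gcd(k, n).
    std::vector<int> gcdTab(n + 1, 1);
    for (size_t i = 0; i < p.size(); ++i)
        for (long long j = 0, h = p[i]; j < c[i]; ++j, h *= p[i])
            for (long long k = h; k <= n; k += h)
                gcdTab[k] *= p[i];

    long long sum = 0;
    for (long long k = 1; k <= n; ++k) sum += gcdTab[k];
    std::cout << sum << "\n";
    return 0;
}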
I've just implemented the GCD algorithm between two numbers, which is quite easy, but I can't get what you are trying to do there.
What I read there is that you are trying to sum up a series of GCDs; but a GCD is the result of a series of mathematical operations between two or more numbers, which yields a single value.
I'm no mathematician, but I think that "sigma" as you wrote it means that you are trying to sum up the GCDs of the numbers between 1 and 10,000,000, which doesn't make sense at all to me.
What are the values you are trying to find the GCD of? All the numbers between 1 and 10,000,000? I doubt that's it.
Anyway, here's a very basic (and hurried) implementation of Euclid's GCD algorithm:
#include <iostream>
using namespace std;

int main()
{
    int num1 = 0, num2 = 0;
    cout << "Insert the first number: ";
    cin >> num1;
    cout << "\n\nInsert the second number: ";
    cin >> num2;
    cout << "\n\n";

    while ((num1 > 0) && (num2 > 0))
    {
        if ((num1 - num2) > 0)
        {
            //cout << "..case1\n";
            num1 -= num2;
        }
        else if ((num2 - num1) > 0)
        {
            //cout << "..case2\n";
            num2 -= num1;
        }
        else if (num1 == num2)
        {
            cout << ">>GCD = " << num1 << "\n\n";
            break;
        }
    }
    return 0;
}
A good place to start looking at this problem is here at the Online Encyclopedia of Integer Sequences, as what you are trying to compute is the value of sequence A018804 at N.
According to one paper linked from the OEIS it's possible to rewrite the sum in terms of Euler's function. This changes the problem into one of prime factorisation - still not easy but likely to be much faster than brute force.
I had occasion to study the computation of GCD sums because the problem cropped up in a HackerEarth tutorial named GCD Sum. Googling turned up some academic papers with useful formulas, which I'm reporting here since they aren't mentioned in the MathOverflow article linked by deviantfan.
For coprime m and n (i.e. gcd(m, n) == 1) the function is multiplicative:
gcd_sum[m * n] = gcd_sum[m] * gcd_sum[n]
Powers e of primes p:
gcd_sum[p^e] = (e + 1) * p^e - e * p^(e - 1)
If only a single sum is to be computed then these formulas could be applied to the result of factoring the number in question, which would still be way faster than repeated gcd() calls or going through the rigmarole proposed by Толя.
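For the single-sum case, one way the formulas might be applied directly (my sketch, assuming a plain trial-division factorization of n):

#include <iostream>

// Pillai's function: sum of gcd(i, n) for i = 1..n, built multiplicatively
// from gcd_sum[p^e] = (e + 1) * p^e - e * p^(e - 1).
long long gcdSum(long long n) {
    long long result = 1;
    for (long long p = 2; p * p <= n; ++p) {
        if (n % p == 0) {
            long long pe = 1, e = 0;
            while (n % p == 0) { n /= p; pe *= p; ++e; }
            result *= (e + 1) * pe - e * (pe / p);
        }
    }
    if (n > 1) result *= 2 * n - 1;  // leftover prime factor with exponent 1
    return result;
}

int main() {
    long long n;
    std::cin >> n;
    std::cout << gcdSum(n) << "\n";  // e.g. gcdSum(6) == 15
    return 0;
}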
However, the formulas could just as easily be used to compute whole tables of the function efficiently. Basically, all you have to do is plug them into the algorithm for linear time Euler totient calculation and you're done - this computes all GCD sums up to a million much faster than you can compute the single GCD sum for the number 10^6 by way of calls to a gcd() function. Basically, the algorithm efficiently enumerates the least factor decompositions of the numbers up to n in a way that makes it easy to compute any multiplicative function - Euler totient (a.k.a. phi), the sigmas or, in fact, GCD sums.
Here's a bit of hashish code that computes a table of GCD sums for smallish limits - ‘small’ in the sense that sqrt(N) * N does not overflow a 32-bit signed integer. IOW, it works for a limit of 10^6 (plenty enough for the HackerEarth task with its limit of 5 * 10^5) but a limit of 10^7 would require sticking (long) casts in a couple of strategic places. However, such hardening of the function for operation at higher ranges is left as the proverbial exercise for the reader... ;-)
static int[] precompute_Pillai (int limit)
{
var small_primes = new List<ushort>();
var result = new int[1 + limit];
result[1] = 1;
int n = 2, small_prime_limit = (int)Math.Sqrt(limit);
for (int half = limit / 2; n <= half; ++n)
{
int f_n = result[n];
if (f_n == 0)
{
f_n = result[n] = 2 * n - 1;
if (n <= small_prime_limit)
{
small_primes.Add((ushort)n);
}
}
foreach (int prime in small_primes)
{
int nth_multiple = n * prime, e = 1, p = 1; // 1e6 * 1e3 < INT_MAX
if (nth_multiple > limit)
break;
if (n % prime == 0)
{
if (n == prime)
{
f_n = 1;
e = 2;
p = prime;
}
else break;
}
for (int q; ; ++e, p = q)
{
result[nth_multiple] = f_n * ((e + 1) * (q = p * prime) - e * p);
if ((nth_multiple *= prime) > limit)
break;
}
}
}
for ( ; n <= limit; ++n)
if (result[n] == 0)
result[n] = 2 * n - 1;
return result;
}
As promised, this computes all GCD sums up to 500,000 in 12.4 ms, whereas computing the single sum for 500,000 via gcd() calls takes 48.1 ms on the same machine. The code has been verified against an OEIS list of the Pillai function (A018804) up to 2000, and up to 500,000 against a gcd-based function - an undertaking that took a full 4 hours.
There's a whole range of optimisations that could be applied to make the code significantly faster, like replacing the modulo division with a multiplication (with the inverse) and a comparison, or to shave some more milliseconds by way of stepping the ‘prime cleaner-upper’ loop modulo 6. However, I wanted to show the algorithm in its basic, unoptimised form because (a) it is plenty fast as it is, and (b) it could be useful for other multiplicative functions, not just GCD sums.
P.S.: modulo testing via multiplication with the inverse is described in section 9 of the Granlund/Montgomery paper Division by Invariant Integers using Multiplication but it is hard to find info on efficient computation of inverses modulo powers of 2. Most sources use the Extended Euclid's algorithm or similar overkill. So here comes a function that computes multiplicative inverses modulo 2^32:
static uint ModularInverse (uint n)
{
uint x = 2 - n;
x *= 2 - x * n;
x *= 2 - x * n;
x *= 2 - x * n;
x *= 2 - x * n;
return x;
}
That's effectively five iterations of Newton-Raphson, in case anyone cares. ;-)
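As a small illustration of the divisibility test the paper describes (my sketch, not from the answer: for odd d, n is divisible by d exactly when n * ModularInverse(d), computed modulo 2^32, is at most UINT32_MAX / d):

#include <cassert>
#include <cstdint>

// ModularInverse repeated from the answer above so the sketch compiles alone.
static uint32_t ModularInverse (uint32_t n)
{
    uint32_t x = 2 - n;
    x *= 2 - x * n;
    x *= 2 - x * n;
    x *= 2 - x * n;
    x *= 2 - x * n;
    return x;
}

// Valid for odd d only: multiplication by d's inverse maps the multiples of d
// onto 0 .. UINT32_MAX / d and everything else above that range.
static bool divisibleByOdd (uint32_t n, uint32_t d)
{
    return n * ModularInverse(d) <= UINT32_MAX / d;
}

int main ()
{
    assert(divisibleByOdd(21, 7));
    assert(!divisibleByOdd(22, 7));
    return 0;
}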
You can use a sieve to store the lowest prime factor of every number up to 10^7,
and then, by prime factorizing the given number with it, calculate your answer directly.
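A minimal sketch of that idea (my code; wiring the factorization into the formulas from the previous answer is left out):

#include <iostream>
#include <vector>

int main() {
    const int LIMIT = 10000000;
    // spf[x] = smallest prime factor of x (0 = not filled in yet).
    std::vector<int> spf(LIMIT + 1, 0);
    for (int i = 2; i <= LIMIT; ++i)
        if (spf[i] == 0)                              // i is prime
            for (long long j = i; j <= LIMIT; j += i)
                if (spf[j] == 0) spf[j] = i;

    int n;
    std::cin >> n;
    while (n > 1) {               // read off the prime factorization of n
        std::cout << spf[n] << " ";
        n /= spf[n];
    }
    std::cout << "\n";
    return 0;
}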
Given two numbers n and k, find x, 1 <= x <= k that maximises the remainder n % x.
For example, n = 20 and k = 10 the solution is x = 7 because the remainder 20 % 7 = 6 is maximum.
My solution to this is:
int n, k;
cin >> n >> k;
int max = 0;
for (int i = 1; i <= k; ++i)
{
    int xx = n - (n / i) * i; // or int xx = n % i;
    if (max < xx)
        max = xx;
}
cout << max << endl;
But my solution is O(k). Is there any more efficient solution to this?
Not asymptotically faster, but faster, simply by going backwards and stopping when you know that you cannot do better.
Assume k is less than n (otherwise just output k).
int max = 0;
for (int i = k; i > 0; --i)
{
    int xx = n - (n / i) * i; // or int xx = n % i;
    if (max < xx)
        max = xx;
    if (i < max)
        break; // all remaining values will be smaller than max, so break out!
}
cout << max << endl;
(This can be further improved by doing the for loop as long as i > max, thus eliminating one conditional statement, but I wrote it this way to make it more obvious)
Also, check Garey and Johnson's Computers and Intractability book to make sure this is not NP-Complete (I am sure I remember some problem in that book that looks a lot like this). I'd do that before investing too much effort on trying to come up with better solutions.
This problem is equivalent to finding the maximum of the function f(x) = n % x in the given range. Let's see what this function looks like:
It is obvious that we could get the maximum sooner if we start with x = k and then decrease x while it makes any sense (until x = max+1). Also, this diagram shows that for x larger than sqrt(n) we don't need to decrease x sequentially. Instead we could jump immediately to the preceding local maximum.
int maxmod(const int n, int k)
{
    int max = 0;
    while (k > max + 1 && k > 4.0 * std::sqrt(n))
    {
        max = std::max(max, n % k);
        k = std::min(k - 1, 1 + n / (1 + n / k));
    }
    for (; k > max + 1; --k)
        max = std::max(max, n % k);
    return max;
}
The magic constant 4.0 improves performance by decreasing the number of iterations of the first (expensive) loop.
The worst-case time complexity could be estimated as O(min(k, sqrt(n))). But for large enough k this estimate is probably too pessimistic: we could find the maximum much sooner, and if k is significantly greater than sqrt(n) we need only 1 or 2 iterations to find it.
I did some tests to determine how many iterations are needed in the worst case for different values of n:
n max.iterations (both/loop1/loop2)
10^1..10^2 11 2 11
10^2..10^3 20 3 20
10^3..10^4 42 5 42
10^4..10^5 94 11 94
10^5..10^6 196 23 196
up to 10^7 379 43 379
up to 10^8 722 83 722
up to 10^9 1269 157 1269
Growth rate is noticeably better than O(sqrt(n)).
For k > n the problem is trivial (take x = n+1).
For k < n, think about the graph of the remainders n % x. It looks the same for all n: the remainders fall to zero at the harmonics of n (n/2, n/3, n/4, ...), after which they jump up and then smoothly decrease towards the next harmonic.
The solution is the rightmost local maximum below k. As a formula: x = n//((n//k)+1)+1 (where // is integer division).
waves hands around
No value of x which is a factor of n can produce the maximum n%x, since if x is a factor of n then n%x=0.
Therefore, you would like a procedure which avoids considering any x that is a factor of n. But this means you want an easy way to know if x is a factor. If that were possible you would be able to do an easy prime factorization.
Since there is not a known easy way to do prime factorization there cannot be an "easy" way to solve your problem (I don't think you're going to find a single formula, some kind of search will be necessary).
That said, the prime factorization literature has cunning ways of getting factors quickly relative to a naive search, so perhaps it can be leveraged to answer your question.
Nice little puzzle!
Starting with the two trivial cases.
for n < k: any x s.t. n < x <= k solves.
for n = k: x = floor(k / 2) + 1 solves.
My attempts.
for n > k:
x = n
while (x > k) {
x = ceil(n / 2)
}
^---- Did not work.
x = floor(float(n) / (floor(float(n) / k) + 1)) + 1
x = ceil(float(n) / (floor(float(n) / k) + 1)) - 1
^---- "Close" (whatever that means), but did not work.
My pride inclines me to mention that I was first to utilize the greatest k-bounded harmonic, given by 1.
Solution.
In line with the other answers, I simply check harmonics (term courtesy of @ColonelPanic) of n less than k, limiting by the present maximum value (courtesy of @TheGreatContini). This is the best of both worlds and I've tested it with random integers between 0 and 10000000 with success.
int maximalModulus(int n, int k) {
    if (n < k) {
        return n;
    }
    else if (n == k) {
        return n % (k / 2 + 1);
    }
    else {
        int max = -1;
        int i = (n / k) + 1;
        int x = 1;
        while (x > max + 1) {
            x = (n / i) + 1;
            if (n % x > max) {
                max = n % x;
            }
            ++i;
        }
        return max;
    }
}
Performance tests:
http://cpp.sh/72q6
Sample output:
Average number of loops:
bruteForce: 516
theGreatContini: 242.8
evgenyKluev: 2.28
maximalModulus: 1.36 // My solution
I may well be wrong, but it looks to me like it depends on whether n < k or not.
I mean, if n < k, then n % (n+1) gives you the maximum, so x = n+1.
Well, on the other hand, you can start from j = k and go backwards evaluating n % j until it is equal to n; thus x = j is what you are looking for, and you'll get it in at most k steps... Too much, isn't it?
Okay, we want to know the divisor that gives the maximum remainder.
Let n be the number to be divided and i the divisor.
We are interested in finding the maximum remainder when n is divided by i, for all i < n.
We know that remainder = n - (n/i) * i // equivalent to n%i
Looking at the above equation, to get the maximum remainder we have to minimize (n/i)*i.
The minimum of n/i for any i < n is 1.
Note that n/i == 1, for i < n, if and only if i > n/2.
Now we have i > n/2.
The least possible value greater than n/2 is n/2 + 1.
Therefore, the divisor that gives the maximum remainder is i = n/2 + 1.
Here is the code in C++
#include <iostream>
using namespace std;

int maxRemainderDivisor(int n){
    n = n >> 1;
    return n + 1;
}

int main(){
    int n;
    cin >> n;
    cout << maxRemainderDivisor(n) << endl;
    return 0;
}
Time complexity: O(1)
As the title says, the task is:
Given number N eliminate K digits to get maximum possible number. The digits must remain at their positions.
Example: n = 12345, k = 3, max = 45 (the first three digits are eliminated, and the remaining digits must not be moved to other positions).
Any idea how to solve this?
(It's not homework, I am preparing for an algorithm contest and solve problems on online judges.)
1 <= N <= 2^60, 1 <= K <= 20.
Edit: Here is my solution. It's working :)
#include <iostream>
#include <string>
#include <queue>
#include <vector>
#include <iomanip>
#include <algorithm>
#include <cmath>
using namespace std;

int main()
{
    string n;
    int k;
    cin >> n >> k;
    int b = n.size() - k - 1;
    int c = n.size() - b;
    int ind = 0;
    vector<char> res;
    char max = n.at(0);
    for (int i = 0; i < n.size() && res.size() < n.size() - k; i++) {
        max = n.at(i);
        ind = i;
        for (int j = i; j < i + c; j++) {
            if (n.at(j) > max) {
                max = n.at(j);
                ind = j;
            }
        }
        b--;
        c = n.size() - 1 - ind - b;
        res.push_back(max);
        i = ind;
    }
    for (int i = 0; i < res.size(); i++)
        cout << res.at(i);
    cout << endl;
    return 0;
}
Brute force should be fast enough for your restrictions: n will have at most 19 digits. Generate all bitmasks with numDigits(n) bits. For each mask with exactly k bits set, remove the digits at the positions corresponding to the set bits. Compare the result with the global optimum and update if needed.
Complexity: O(2^log n * log n). While this may seem like a lot and the same thing as O(n) asymptotically, it's going to be much faster in practice, because the logarithm in O(2^log n * log n) is a base 10 logarithm, which will give a much smaller value (1 + log base 10 of n gives you the number of digits of n).
You can avoid the log n factor by generating combinations of n taken n - k at a time and building the number made up of the chosen n - k positions as you generate each combination (pass it as a parameter). This basically means you solve the similar problem: given n, pick n - k digits in order such that the resulting number is maximum).
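A sketch of that brute force (my code; a set bit means "delete this digit", and since all candidates have the same length, plain string comparison is numeric comparison):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string n;
    int k;
    std::cin >> n >> k;

    int d = (int)n.size();        // at most ~19 digits, so 2^d masks is fine
    std::string best;
    for (unsigned mask = 0; mask < (1u << d); ++mask) {
        if ((int)std::bitset<32>(mask).count() != k) continue;
        std::string candidate;
        for (int i = 0; i < d; ++i)
            if (!(mask & (1u << i))) candidate += n[i];
        // All survivors have length d - k, so lexicographic order is numeric order.
        if (candidate > best) best = candidate;
    }
    std::cout << best << "\n";
    return 0;
}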
Note: there is a method to solve this that does not involve brute force, but I wanted to show the OP this solution as well, since he asked how it could be brute forced in the comments. For the optimal method, investigate what would happen if we built our number digit by digit from left to right, and, for each digit d, we would remove all currently selected digits that are smaller than it. When can we remove them and when can't we?
In the leftmost k+1 digits, find the largest one (let us say it is located at the i-th position; in case there are multiple occurrences, choose the leftmost one). Keep it. Repeat the algorithm with k_new = k - i + 1 and newNumber = digits i+1 to n of the original number.
Eg. k=5 and number = 7454982641
First k+1 digits: 745498
Best number is 9 and it is located at location i=5.
new_k=1, new number = 82641
First k+1 digits: 82
Best number is 8 and it is located at i=1.
new_k=1, new number = 2641
First k+1 digits: 26
Best number is 6 and it is located at i=2
new_k=0, new number = 41
Answer: 98641
Complexity is O(n) where n is the size of the input number.
Edit: As iVlad mentioned, in the worst case complexity can be quadratic. You can avoid that by maintaining a heap of size at most k+1 which will increase complexity to O(nlogk).
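A sketch of this procedure (my code; the window bound n.size() - keep is the same leftmost-k+1-digits rule described above, with k shrinking implicitly):

#include <iostream>
#include <string>

int main() {
    std::string n;
    int k;
    std::cin >> n >> k;

    std::string result;
    int start = 0;                     // first digit still available
    int keep = (int)n.size() - k;      // digits that still have to be output
    while (keep > 0) {
        // The chosen digit may sit at most at index n.size() - keep, otherwise
        // too few digits would remain to its right; ties keep the leftmost.
        int best = start;
        for (int i = start + 1; i <= (int)n.size() - keep; ++i)
            if (n[i] > n[best]) best = i;
        result += n[best];
        start = best + 1;
        --keep;
    }
    std::cout << result << "\n";
    return 0;
}

This is O(n*k) in the worst case; as noted above, maintaining a heap over the window brings it down to O(n log k).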
Following may help:
void removeNumb(std::vector<int>& v, int k)
{
    if (k == 0) { return; }
    if (k >= v.size()) {
        v.clear();
        return;
    }
    for (int i = 0; i != v.size() - 1; )
    {
        if (v[i] < v[i + 1]) {
            v.erase(v.begin() + i);
            if (--k == 0) { return; }
            i = std::max(i - 1, 0);
        } else {
            ++i;
        }
    }
    v.resize(v.size() - k);
}