Array: mathematical sequence - C++

An array of integers A[i] (i >= 1) is defined in the following way: an element A[k] (k > 1) is the smallest number greater than A[k-1] such that the sum of its digits is equal to the sum of the digits of the number 4*A[k-1].
You need to write a program that calculates the N th number in this array based on the given first element A[1] .
INPUT:
In one line of standard input there are two numbers separated by a single space: A[1] (1 <= A[1] <= 100) and N (1 <= N <= 10000).
OUTPUT:
The standard output should only contain a single integer A[N] , the Nth number of the defined sequence.
Input:
7 4
Output:
79
Explanation:
Elements of the array are as follows: 7, 19, 49, 79... and the 4th element is the solution.
I tried solving this by coding a separate function that, for a given number A[k], calculates the sum of its digits and finds the smallest number greater than A[k-1] as the problem says, but with no success. The first test failed because of a memory limit, the second test failed because of a time limit, and now I don't have any idea how to solve this. One friend suggested recursion, but I don't know how to set that up.
Anyone who can help me in any way please write, also suggest some ideas about using recursion/DP for solving this problem. Thanks.

This has nothing to do with recursion and almost nothing with dynamic programming. You just need to find viable optimizations to make it fast enough. Just a hint, try to understand this solution:
http://codepad.org/LkTJEILz
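As a concrete illustration of the kind of optimization the linked solution relies on (the helper names below are my own, not taken from the link): since digit_sum(c) ≡ c (mod 9), any candidate with the required digit sum must be congruent to that target mod 9, so after locating the first congruent candidate you can step through candidates 9 at a time instead of 1:

```python
def digit_sum(x):
    s = 0
    while x > 0:
        s += x % 10
        x //= 10
    return s

def nth_term(a1, n):
    a = a1
    for _ in range(n - 1):
        target = digit_sum(4 * a)
        # digit_sum(c) is congruent to c mod 9, so every candidate c with
        # digit_sum(c) == target satisfies c % 9 == target % 9.
        c = a + 1
        while c % 9 != target % 9:   # walk to the first congruent candidate
            c += 1
        while digit_sum(c) != target:  # then step 9 at a time
            c += 9
        a = c
    return a
```

For the sample input `7 4` this produces 79, matching the expected output.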

Here is a simple solution in Python. It only uses iteration; recursion is unnecessary and inefficient even for a quick and dirty solution.
def sumDigits(x):
    total = 0
    while x > 0:
        total += x % 10
        x //= 10  # integer division; x /= 10 would produce a float in Python 3
    return total

def homework(a0, N):
    a = [a0]
    while len(a) < N:
        nextNum = a[-1] + 1
        while sumDigits(nextNum) != sumDigits(4 * a[-1]):
            nextNum += 1
        a.append(nextNum)
    return a[N - 1]
PS. I know we're not really supposed to give homework answers, but it appears the OP is in an intro to C++ class so probably doesn't know Python yet; hopefully it just looks like pseudocode. Also, the code omits many simple optimizations, so it would probably still be too slow as a final solution.

It is rather recursive.
The kernel of the problem is:
Find the smallest number N greater than K having digitsum(N) = J.
If digitsum(K) == J then test if N = K + 9 satisfies the condition.
If digitsum(K) < J then possibly N differs from K only in the ones digit (if the digitsum can be achieved without exceeding 9).
Otherwise if digitsum(K) <= J the new ones digit is 9 and the problem recurses to "Find the smallest number N' greater than (K/10) having digitsum(N') = J-9, then N = N'*10 + 9".
If digitsum(K) > J then ???
In every case N <= 4 * K
9 -> 18 by the first rule
52 -> 55 by the second rule
99 -> 189 by the third rule, the first rule is used during recursion
25 -> 100 requires the fourth case, which I had originally not seen the need for.
Any more counterexamples?
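A quick brute-force sanity check of the worked examples and of the bound (the function names here are mine, just for the check):

```python
def digit_sum(x):
    s = 0
    while x:
        s += x % 10
        x //= 10
    return s

def smallest_with_digitsum(k, j):
    # brute force: smallest n > k whose digits sum to j
    n = k + 1
    while digit_sum(n) != j:
        n += 1
    return n

# the worked examples above
assert smallest_with_digitsum(9, digit_sum(4 * 9)) == 18
assert smallest_with_digitsum(52, digit_sum(4 * 52)) == 55
assert smallest_with_digitsum(99, digit_sum(4 * 99)) == 189
assert smallest_with_digitsum(25, digit_sum(4 * 25)) == 100
# the bound N <= 4*K always holds when J = digit_sum(4*K), since 4*K itself
# is a number greater than K with digit sum J
assert all(smallest_with_digitsum(k, digit_sum(4 * k)) <= 4 * k
           for k in range(1, 500))
```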


Traversing a binary tree to get from one number to another using only two operations

I'm doing a problem that says that we have to get from one number, n, to another, m, in as few steps as possible, where each "step" can be 1) doubling, or 2) subtracting one. The natural approach is to construct a binary tree and run BFS, since we are given that n, m are bounded by 0 ≤ n, m ≤ 10^4 and so the tree doesn't get that big. However, I ran into a stunningly short solution, and have no idea why it works. It basically goes from m … n instead, halving or adding one as necessary to decrease m until it is less than n, and then just adding to get up to n. Here is the code:
while (n < m) {
    if (m % 2) m++;
    else m /= 2;
    count++;
}
count = count + n - m;
return count;
Is it obvious why this is necessarily the shortest path? I get that going from m … n is natural because n is lower bounded by zero and so the tree becomes more "finite" in some sense, but this method of modified halving until you get below the number, then adding up until you reach it, doesn't seem like it should necessarily always return the correct answer, yet it does. Why, and how might I have recognized this approach from the get-go?
You only have 2 available operations:
double n
subtract 1 from n
That means the only way to go up is to double and the only way to go down is to subtract 1.
If m is an even number, then you can land on it by doubling n when 2*n = m. Otherwise, you will have to subtract 1 as well (if 2*n = m + 1 then you will have to double n and then subtract 1).
If doubling n lands too far above m, then you will have to subtract twice as many times as if you had done the subtractions before doubling n.
example:
n = 12 and m = 20.
You can either double n and then subtract 4 times as in 12*2 -4 = 20. - 5 steps
Or you can subtract twice and then double n as in (12-2)*2 = 20. - 3 steps
You might be wondering 'How should I pick between doubling or subtracting when n < m/2?'.
The idea is to use a recurrence-based approach. You know that you want n to reach a value v such that v = m/2 or v = (m+1)/2. In other words you want n to reach v... and the shortest way to do that is to reach a value v' such that v' = v/2 or v' = (v+1)/2, and so on.
example:
n = 2 and m = 21.
You want n to reach (21+1)/2 = 11 which means you want to reach (11+1)/2 = 6 and thus to reach 6/2=3 and thus to reach (3+1)/2 = 2.
Since n=2 you now know that the shortest path is: (((n*2-1)*2)*2-1)*2-1.
other example:
n = 14 and m = 22.
You want n to reach 22/2 = 11.
n is already above 11 so the shortest path is : (n-1-1-1)*2.
From here, you can see that the shortest path can be deduced without a binary tree.
On top of that, you have to think starting from m and going down to an obvious path for n. This implies that it will be easier to code an algorithm going from m to n than the opposite.
Using recurrence, this function achieves the same result:
function shortest(n, m) {
    if (n >= m) return n - m; // only way to go down
    if (m % 2 == 0) return 1 + shortest(n, m/2); // if m is even => optimum goal is m/2
    else return 2 + shortest(n, (m+1)/2); // else optimum goal is (m+1)/2, which necessitates 2 operations
}
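The same recurrence translates directly to Python, with the worked examples from above serving as checks:

```python
def shortest(n, m):
    """Minimum number of steps (double n, or subtract 1 from n) to turn n into m."""
    if n >= m:
        return n - m                      # only subtracting gets us down
    if m % 2 == 0:
        return 1 + shortest(n, m // 2)    # the last step must be a double
    return 2 + shortest(n, (m + 1) // 2)  # double, then subtract 1

assert shortest(12, 20) == 3   # (12 - 2) * 2
assert shortest(2, 21) == 7    # (((2*2 - 1)*2)*2 - 1)*2 - 1
assert shortest(14, 22) == 4   # (14 - 1 - 1 - 1) * 2
```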

Find how many numbers in a range L, R are perfect squares whose sqrt() is a prime number

First of all, feel free to improve the formatting of the question.
This question is already solved by Goswin von Brederlow!
Hello, I'm practicing programming and this problem came up, and I don't know the best way to solve it:
The question has T test cases, each consisting of:
Given a range L and R, find how many numbers meet the constraints:
i) Number is a perfect square;
ii) The sqrt(Number) is a prime number.
Limits:
1 <= T <= 1e4
1 <= L, R <= 1e12
Time limit: 1 second
So, I tried a simple idea: pre-processing, into an unordered_map<long long, bool>, all the answers, using the sieve of Eratosthenes for each sqrt(Number) given that Number is a perfect square.
Then I walk the range starting at pow(ceil(sqrt(l)), 2) and increment by successive odd numbers, since consecutive squares 4, 9, 16, 25 differ by the odd numbers 5, 7, 9...
My code:
long long int l, r; scanf("%lld %lld", &l, &r);
long long int odd = ceil(sqrt(l)), n = odd*odd;
odd = (2*odd) + 1;
long long int ans = 0;
while (n >= l && n <= r) {
    it = h.find(n);
    ans = ans + it->second;
    n = n + odd;
    odd = odd + 2;
}
But I still got TLE. I need some help, guys, thank you!
The only question I see is the one I infer from "i don't know how is the best way to solve this".
Your way seems to be pretty smart already. You have some bugs even in just the bit of code you pasted. You seem to compute the sum of the numbers instead of counting them.
One thing I could think of improving is the counting itself. It looks like you have a hash table of all the squares of primes and you simply test all squares to see if they are in the table.
Why not make a balanced tree out of the primes? In each node you store the minimum and maximum number and the count for the subtree. The leaves would be (n, n, 1) where n is one of the squares of primes. Given L and R you can then go through the tree and sum up all subtrees that are in the interval and recurse into subtrees that are only partially inside. Ignore everything outside. That should add a logarithmic factor to your complexity.
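The logarithmic counting does not strictly need a hand-built tree: a sorted array of primes plus binary search answers the same interval query. A sketch (this swaps the balanced tree for Python's bisect on a sorted prime list; the names are my own):

```python
import bisect
import math

LIMIT = 1_000_000  # sqrt(1e12): only primes up to here can have p*p <= R

def sieve(limit):
    """Sieve of Eratosthenes, returning the sorted list of primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

PRIMES = sieve(LIMIT)

def count_prime_squares(l, r):
    """Count numbers in [l, r] that are squares of a prime."""
    lo = math.isqrt(l - 1) + 1   # smallest p with p*p >= l
    hi = math.isqrt(r)           # largest  p with p*p <= r
    return bisect.bisect_right(PRIMES, hi) - bisect.bisect_left(PRIMES, lo)

assert count_prime_squares(1, 10) == 2   # 4 and 9
assert count_prime_squares(5, 8) == 0
```

Each query is then two binary searches, well within the 1-second limit for 10^4 test cases once the sieve is done.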

A problem of taking combination for set theory

Given an array A with size N. The value of a subset of array A is defined as the product of all numbers in that subset. We have to return the product of the values of all possible non-empty subsets of array A, modulo (10^9+7).
E.G. array A {3,5}
Value{3} = 3
Value{5} = 5
Value{3,5} = 5*3 = 15
answer = 3*5*15 % (10^9+7)
Can someone explain the mathematics behind the problem? I am thinking of using combinatorics to solve it efficiently.
I have tried using brute force; it gives the correct answer but is way too slow.
My next approach uses combinatorics. I think that if we take all the subsets and multiply all the numbers in those subsets, we will get the correct answer. Thus I have to find out how many times each number comes up in the calculation of the answer. In the example, 5 and 3 both come up 2 times. If we look closely, each number in A will come up the same number of times.
You're heading in the right direction.
Let x be an element of the given array A. In our final answer, x appears p times, where p is the number of subsets of A that include x.
How to calculate p? Once we have decided that we will definitely include x in our subset, we have two choices for the rest N-1 elements: either include them in set or do not. So, we conclude p = 2^(N-1).
So, each element of A appears exactly 2^(N-1) times in the final product. All that remains is to calculate the answer: (a1 * a2 * ... * an)^p. Since the exponent is very large, you can use binary exponentiation for fast calculation.
As Matt Timmermans suggested in comments below, we can obtain our answer without actually calculating p = 2^(N-1). We first calculate the product a1 * a2 * ... * an. Then, we simply square this product n-1 times.
The corresponding code in C++:
int func(vector<int> &a) {
    int n = a.size();
    int m = 1e9+7;
    if (n == 0) return 0;
    if (n == 1) return (m + a[0]%m) % m;
    long long ans = 1;
    // first calculate ans = (a1*a2*...*an) % m
    for (int x : a) {
        // negative sign does not matter since we're squaring
        if (x < 0) x *= -1;
        x %= m;
        ans *= x;
        ans %= m;
    }
    // now calculate ans = [ ans^(2^(n-1)) ] % m
    // we do this by squaring ans n-1 times
    for (int i = 1; i < n; i++) {
        ans = ans * ans;
        ans %= m;
    }
    return (int)ans;
}
Let,
A={a,b,c}
All possible subsets of A are: {{},{a},{b},{c},{a,b},{b,c},{c,a},{a,b,c}}
Here each element occurs 4 times.
So if A={a,b,c,d}, then the number of occurrences of each element will be 2^3.
So if the size of A is n, the number of occurrences of each element will be 2^(n-1).
So the final result will be a1^p * a2^p * a3^p * ... * an^p,
where p is 2^(n-1).
We need to solve x^2^(n-1) % mod.
We can write x^(2^(n-1)) % mod as x^((2^(n-1)) % phi(mod)) % mod, by Euler's theorem.
As mod is a prime, phi(mod) = mod - 1.
So first find p = 2^(n-1) % (mod-1).
Then find Ai^p % mod for each of the numbers and multiply them into the final result.
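A sketch of this exponent-reduction approach (the names are mine; the reduction assumes, as Fermat's little theorem requires, that no element is a multiple of the prime modulus):

```python
M = 10**9 + 7

def subset_product(a):
    """Product of the values of all non-empty subsets of a, mod M."""
    # each element appears in 2^(n-1) subsets; reduce that exponent mod M-1
    # (valid by Fermat's little theorem when x is not a multiple of prime M)
    p = pow(2, len(a) - 1, M - 1)
    ans = 1
    for x in a:
        ans = ans * pow(x % M, p, M) % M
    return ans

assert subset_product([3, 5]) == (3 * 5 * 15) % M
assert subset_product([2, 3, 4]) == 2 * 3 * 4 * 6 * 8 * 12 * 24  # all 7 subsets
```

Python's three-argument pow does the binary exponentiation, so no explicit loop is needed.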
I read the previous answers and I was understanding the process of making sets. So here I am trying to put it in as simple as possible for people so that they can apply it to similar problems.
Let i be an element of array A. Following the approach given in the question, i appears p times in the final answer.
Now, how do we make the different sets? We take sets containing only one element, then sets containing groups of two, then groups of 3, ..., up to groups of n elements.
Now we want to know, every time we are making sets of a certain size, say groups of 3 elements, how many of these sets contain i?
There are n elements, so for sets of 3 elements which always contain i, the number of combinations is (n-1)C(3-1), because from the other n-1 elements we can choose 3-1 elements.
If we do this for every group size x, p = Σ (n-1)C(x-1), with x going from 1 to n. Thus, p = 2^(n-1).
Similarly, for every element i, p will be the same. Thus we get
final answer = A[0]^p * A[1]^p * ... * A[n-1]^p

longest contiguous subsequence such that twice the number of zeroes is less than or equal to thrice the number of ones

I was trying to solve this problem from HackerRank. I tried the brute force solution but it doesn't seem to work. Can someone give me an idea to solve this problem efficiently?
https://www.hackerrank.com/contests/sep13/challenges/sherlock-puzzle
Given a binary string (S) which contains ‘0’s and ‘1’s and an integer K,
find the length (L) of the longest contiguous subsequence of (S * K) such that twice the number of zeroes is <= thrice the number of ones (2 * #0s <= 3 * #1s) in that sequence.
S * K is defined as follows: S * 1 = S
S * K = S + S * (K - 1)
Input Format
The first (and only) line contains an integer K and the binary string S separated by a single space.
Constraints
1 <= |S| <= 1,000,000
1 <= K <= 1,000,000
Output Format
A single integer L - the answer to the test case
Here's a hint:
Let's first suppose K = 1 and that S looks like (using a dot for 0):
..1...11...11.....111111....111....
  e    f          b a  c    d
The key is to note that if the longest acceptable sequence contains a 1 it will also contain any adjacent ones. For example, if the longest sequence contains the 1 at a, it will also contain all of the ones between b and c (inclusive).
So you only have to analyze the sequence at the points where the blocks of ones are.
The main question is: if you start at a certain block of ones, can you make it to the next block of ones? For instance, if you start at e you can make it to the block at f but not to b. If you start at b you can make it to the block at d, etc.
Then generalize the analysis for K > 1.
Brute force obviously won't work since it's O((n * k) ** 2). I will use Python-style list comprehensions in this answer. You'll need an array t = [3 if el == "1" else -2 for el in S]. Now if you use the prefix-sum array p[i] = t[0] + ... + t[i], you can see that in the k == 1 case you are basically looking for a pair (i, j), i < j, such that p[j] - (p[i - 1] if i != 0 else 0) >= 0 holds and j - i is maximal among these pairs. Now for each i in 0..n-1 you have to find its pair j such that the above is maximal. This can be done in O(log n) for a specific i, so this gives an O(n log n) solution for the k == 1 case. This can be extended to an O(n log n) solution for the general case (there is a trick to find the largest block that can be covered). There is also an O(n) solution to this problem, but you need to further examine the p sequence for that. I don't suggest writing a solution in a scripting language, though. Even the O(n) solution times out in Python...
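For the k == 1 case, the prefix-sum formulation above reduces to the classic "maximize j - i subject to p[j] >= p[i]" two-pointer scan, which runs in O(n). A sketch (my own function names, k == 1 only):

```python
def longest_good(s):
    """Length of the longest substring with 2*#zeros <= 3*#ones."""
    n = len(s)
    # p[i] = sum of the first i weights (3 for '1', -2 for '0')
    p = [0] * (n + 1)
    for i, c in enumerate(s):
        p[i + 1] = p[i] + (3 if c == '1' else -2)
    # a window (i, j] is good iff p[j] - p[i] >= 0; maximize j - i
    lmin = [0] * (n + 1)   # lmin[i] = min(p[0..i])
    rmax = [0] * (n + 1)   # rmax[j] = max(p[j..n])
    lmin[0] = p[0]
    for i in range(1, n + 1):
        lmin[i] = min(lmin[i - 1], p[i])
    rmax[n] = p[n]
    for j in range(n - 1, -1, -1):
        rmax[j] = max(rmax[j + 1], p[j])
    # classic two-pointer scan for the widest pair with p[j] >= p[i]
    ans = i = j = 0
    while i <= n and j <= n:
        if lmin[i] <= rmax[j]:
            ans = max(ans, j - i)
            j += 1
        else:
            i += 1
    return ans

assert longest_good("011") == 3
assert longest_good("00100") == 2   # e.g. "01"
assert longest_good("0") == 0
```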

Dynamic programming algorithm N, K problem

An algorithm which will take two positive numbers N and K and calculate the biggest possible number we can get by transforming N into another number via removing K digits from N.
For example, let's say we have N=12345 and K=3, so the biggest possible number we can get by removing 3 digits from N is 45 (other transformations would be 12, 15, 35, but 45 is the biggest). Also, you cannot change the order of the digits in N (so 54 is NOT a solution). Another example would be N=66621542 and K=3, so the solution will be 66654.
I know this is a dynamic-programming-related problem and I can't come up with any idea for solving it. I have been trying to solve this for 2 days, so any help is appreciated. If you don't want to solve this for me you don't have to, but please point me to the trick, or at least some materials where I can read up more about similar problems.
Thank you in advance.
This can be solved in O(L) where L = number of digits. Why use complicated DP formulas when we can use a stack to do this:
For: 66621542
Push a digit onto the stack while there are fewer than L - K digits on the stack:
66621. Now, remove digits from the stack while they are less than the currently read digit and put the current digit on the stack:
read 5: 5 > 2, pop 1 off the stack. 5 > 2, pop 2 also. put 5: 6665
read 4: stack isn't full, put 4: 66654
read 2: 2 < 4, do nothing.
You need one more condition: be sure not to pop off more items from the stack than there are digits left in your number, otherwise your solution will be incomplete!
Another example: 12345
L = 5, K = 3
put L - K = 2 digits on the stack: 12
read 3, 3 > 2, pop 2, 3 > 1, pop 1, put 3. stack: 3
read 4, 4 > 3, pop 3, put 4: 4
read 5: 5 > 4, but we can't pop 4, otherwise we won't have enough digits left. so push 5: 45.
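The stack procedure described above, sketched in Python (the names are my own):

```python
def largest_after_removing(digits, k):
    """Greedy stack: pop a smaller top digit while removals remain,
    then keep the first len(digits) - k digits of the stack."""
    keep = len(digits) - k
    stack = []
    removals = k
    for d in digits:
        # pop smaller digits, but never remove more than k in total --
        # that guard is exactly the "don't pop the 4 before the 5" condition
        while stack and removals > 0 and stack[-1] < d:
            stack.pop()
            removals -= 1
        stack.append(d)
    # if fewer than k pops happened, trim the surplus off the tail
    return ''.join(stack[:keep])

assert largest_after_removing("12345", 3) == "45"
assert largest_after_removing("66621542", 3) == "66654"
```

Each digit is pushed and popped at most once, so this is O(L).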
Well, to solve any dynamic programming problem, you need to break it down into recurring subsolutions.
Say we define your problem as A(n, k), which returns the largest number possible by removing k digits from n.
We can define a simple recursive algorithm from this.
Using your example, A(12345, 3) = max { A(2345, 2), A(1345, 2), A(1245, 2), A(1234, 2) }
More generally, A(n, k) = max { A(n with 1 digit removed, k - 1) }
And your base case is A(n, 0) = n.
Using this approach, you can create a table that caches the values of n and k.
int A(int n, int k)
{
    typedef std::pair<int, int> input;
    static std::map<input, int> cache;
    if (k == 0) return n;
    input i(n, k);
    if (cache.find(i) != cache.end())
        return cache[i];
    cache[i] = /* ... as above ... */
    return cache[i];
}
Now, that's the straightforward solution, but there is a better solution that works with a very small one-dimensional cache. Consider rephrasing the question like this: "Given a string n and integer k, find the lexicographically greatest subsequence of n of length |n| - k". This is essentially what your problem is, and the solution is much more simple.
We can now define a different function B(i, j), which gives the largest lexicographical sequence of length (i - j), using only the first i digits of n (in other words, having removed j digits from the first i digits of n).
Using your example again, we would have:
B(1, 0) = 1
B(2, 0) = 12
B(3, 0) = 123
B(3, 1) = 23
B(3, 2) = 3
etc.
With a little bit of thinking, we can find the recurrence relation:
B(i, j) = max( 10*B(i-1, j) + n_i , B(i-1, j-1) ), where n_i is the i-th digit of n
or, if j = i then B(i, j) = B(i-1, j-1)
and B(0, 0) = 0
And you can code that up in a very similar way to the above.
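The B(i, j) recurrence can be coded bottom-up with a small one-dimensional table. A sketch (in Python rather than C++, with my own names; -1 marks unreachable states):

```python
def largest_number_dp(digits, k):
    """B[j] = largest value formed from the digits seen so far with j removed."""
    NEG = -1                       # marks an unreachable state
    B = [NEG] * (k + 1)
    B[0] = 0                       # base case B(0, 0) = 0
    for i in range(1, len(digits) + 1):
        d = int(digits[i - 1])
        newB = [NEG] * (k + 1)
        for j in range(min(i, k) + 1):
            best = NEG
            if j <= i - 1 and B[j] != NEG:   # keep digit i: 10*B(i-1,j) + n_i
                best = 10 * B[j] + d
            if j >= 1 and B[j - 1] != NEG:   # remove digit i: B(i-1, j-1)
                best = max(best, B[j - 1])
            newB[j] = best
        B = newB
    return B[k]

assert largest_number_dp("12345", 3) == 45
assert largest_number_dp("66621542", 3) == 66654
```

Comparing values numerically is safe here because every candidate for B(i, j) has exactly i - j digits, so numeric and lexicographic order coincide.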
The trick to solving a dynamic programming problem is usually figuring out what the structure of a solution looks like, and more specifically whether it exhibits optimal substructure.
In this case, it seems to me that the optimal solution with N=12345 and K=3 would have an optimal solution to N=12345 and K=2 as part of the solution. If you can convince yourself that this holds, then you should be able to express a solution to the problem recursively. Then either implement this with memoisation or bottom-up.
The most important elements of any dynamic programming solution are:
Defining the right subproblems
Defining a recurrence relation between the answer to a sub-problem and the answer to smaller sub-problems
Finding base cases, the smallest sub-problems whose answer does not depend on any other answers
Figuring out the scan order in which you must solve the sub-problems (so that you never use the recurrence relation based on uninitialized data)
You'll know that you have the right subproblems defined when
The problem you need the answer to is one of them
The base cases really are trivial
The recurrence is easy to evaluate
The scan order is straightforward
In your case, it is straightforward to specify the subproblems. Since this is probably homework, I will just give you the hint that you might wish that N had fewer digits to start off with.
Here's what I think:
Consider the first k + 1 digits from the left. Look for the biggest one, find it, and remove the digits to its left. If there exist two occurrences of the same biggest digit, take the leftmost one and remove the digits to the left of that. Store the number of removed digits (name it j).
Do the same thing with the new number as N and k+1-j as K. Do this until k+1-j equals 1 (hopefully it will, if I'm not mistaken).
The number you end up with will be the number you're looking for.
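The leftmost-maximum idea above can be sketched like this (the names are mine; each output digit is the leftmost maximum of the window of digits that still leaves enough digits to finish the number):

```python
def greedy_pick(digits, k):
    """Repeatedly take the leftmost maximum of the allowed window."""
    keep = len(digits) - k
    result = []
    start = 0
    for pos in range(keep):
        # the window may extend only as far as leaves keep-pos-1 digits after it
        end = len(digits) - (keep - pos - 1)
        window = digits[start:end]
        best = max(window)
        start += window.index(best) + 1   # index() picks the leftmost maximum
        result.append(best)
    return ''.join(result)

assert greedy_pick("12345", 3) == "45"
assert greedy_pick("66621542", 3) == "66654"
```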